Test Report: Docker_Linux_crio 21966

f7c9a93757611cb83a7bfb680dda9add42d627cb:2025-11-23:42464

Failed tests (37/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 13.77
36 TestAddons/parallel/RegistryCreds 0.4
37 TestAddons/parallel/Ingress 146.63
38 TestAddons/parallel/InspektorGadget 6.25
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 49.54
42 TestAddons/parallel/Headlamp 2.47
43 TestAddons/parallel/CloudSpanner 6.27
44 TestAddons/parallel/LocalPath 10.06
45 TestAddons/parallel/NvidiaDevicePlugin 6.26
46 TestAddons/parallel/Yakd 6.25
47 TestAddons/parallel/AmdGpuDevicePlugin 6.26
97 TestFunctional/parallel/ServiceCmdConnect 602.76
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.93
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.24
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
137 TestFunctional/parallel/ServiceCmd/DeployApp 600.55
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
153 TestFunctional/parallel/ServiceCmd/Format 0.52
154 TestFunctional/parallel/ServiceCmd/URL 0.52
191 TestJSONOutput/pause/Command 2.38
197 TestJSONOutput/unpause/Command 1.63
283 TestPause/serial/Pause 8.81
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.83
355 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.31
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.98
359 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2
370 TestStartStop/group/newest-cni/serial/Pause 6.32
376 TestStartStop/group/old-k8s-version/serial/Pause 6.14
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.76
385 TestStartStop/group/no-preload/serial/Pause 5.56
386 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.28
393 TestStartStop/group/embed-certs/serial/Pause 5.74
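Every MK_ADDON_DISABLE_PAUSED failure below shares one signature: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node, and that second step exits 1 with "open /run/runc: no such file or directory". A minimal sketch for reproducing the check by hand, assuming the addons-959783 profile is still up; both node-side commands are taken verbatim from the stderr logs below, and only the minikube ssh wrapper is our addition:

	# list kube-system containers the same way the paused-check does
	minikube -p addons-959783 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# the step that fails: runc finds no state directory at /run/runc, consistent
	# with CRI-O driving a different low-level runtime (e.g. crun) on this image
	minikube -p addons-959783 ssh -- sudo runc list -f json

If the second command reproduces the error while the first still lists containers, the failures are in the paused-check plumbing rather than in the individual addons.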
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable volcano --alsologtostderr -v=1: exit status 11 (247.73232ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:57:36.363782   23775 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:36.364236   23775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:36.364246   23775 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:36.364250   23775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:36.364401   23775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:36.364638   23775 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:36.364962   23775 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:36.364978   23775 addons.go:622] checking whether the cluster is paused
	I1123 07:57:36.365060   23775 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:36.365072   23775 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:36.365442   23775 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:36.383058   23775 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:36.383104   23775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:36.399960   23775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:36.498003   23775 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:36.498084   23775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:36.528612   23775 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:36.528639   23775 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:36.528644   23775 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:36.528647   23775 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:36.528650   23775 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:36.528658   23775 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:36.528661   23775 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:36.528663   23775 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:36.528666   23775 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:36.528675   23775 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:36.528678   23775 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:36.528681   23775 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:36.528696   23775 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:36.528701   23775 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:36.528706   23775 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:36.528719   23775 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:36.528727   23775 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:36.528731   23775 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:36.528734   23775 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:36.528737   23775 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:36.528740   23775 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:36.528742   23775 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:36.528745   23775 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:36.528748   23775 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:36.528751   23775 cri.go:89] found id: ""
	I1123 07:57:36.528793   23775 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:36.542787   23775 out.go:203] 
	W1123 07:57:36.543868   23775 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:36.543893   23775 out.go:285] * 
	* 
	W1123 07:57:36.546764   23775 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:36.547756   23775 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)

TestAddons/parallel/Registry (13.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.251707ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-mq8bw" [e0c7828e-fc45-45aa-b3c4-89e8cad6740e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.001829532s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-txmj8" [61917d5c-8217-4b89-b9e1-02789e24dd18] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002795948s
addons_test.go:392: (dbg) Run:  kubectl --context addons-959783 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-959783 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-959783 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.317412319s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 ip
2025/11/23 07:57:57 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable registry --alsologtostderr -v=1: exit status 11 (250.170287ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:57:57.881096   25476 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:57.881413   25476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:57.881430   25476 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:57.881436   25476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:57.881717   25476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:57.882055   25476 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:57.882405   25476 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:57.882422   25476 addons.go:622] checking whether the cluster is paused
	I1123 07:57:57.882534   25476 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:57.882553   25476 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:57.883057   25476 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:57.901074   25476 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:57.901136   25476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:57.919892   25476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:58.019730   25476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:58.019794   25476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:58.047193   25476 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:58.047226   25476 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:58.047233   25476 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:58.047239   25476 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:58.047244   25476 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:58.047250   25476 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:58.047255   25476 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:58.047260   25476 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:58.047264   25476 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:58.047276   25476 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:58.047285   25476 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:58.047290   25476 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:58.047297   25476 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:58.047301   25476 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:58.047308   25476 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:58.047322   25476 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:58.047333   25476 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:58.047339   25476 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:58.047344   25476 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:58.047348   25476 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:58.047352   25476 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:58.047357   25476 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:58.047361   25476 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:58.047366   25476 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:58.047370   25476 cri.go:89] found id: ""
	I1123 07:57:58.047419   25476 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:58.061264   25476 out.go:203] 
	W1123 07:57:58.062122   25476 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:58.062143   25476 out.go:285] * 
	* 
	W1123 07:57:58.065255   25476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:58.066260   25476 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.77s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.955476ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-959783
addons_test.go:332: (dbg) Run:  kubectl --context addons-959783 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (236.3983ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:57:59.894971   26394 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:59.895133   26394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:59.895143   26394 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:59.895147   26394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:59.895316   26394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:59.895542   26394 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:59.895882   26394 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:59.895895   26394 addons.go:622] checking whether the cluster is paused
	I1123 07:57:59.895976   26394 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:59.895987   26394 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:59.896327   26394 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:59.913419   26394 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:59.913467   26394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:59.929529   26394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:58:00.027637   26394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:58:00.027717   26394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:58:00.056919   26394 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:58:00.056946   26394 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:58:00.056950   26394 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:58:00.056954   26394 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:58:00.056956   26394 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:58:00.056964   26394 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:58:00.056967   26394 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:58:00.056970   26394 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:58:00.056972   26394 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:58:00.056992   26394 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:58:00.056997   26394 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:58:00.057001   26394 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:58:00.057005   26394 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:58:00.057009   26394 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:58:00.057013   26394 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:58:00.057028   26394 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:58:00.057037   26394 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:58:00.057041   26394 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:58:00.057044   26394 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:58:00.057047   26394 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:58:00.057052   26394 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:58:00.057054   26394 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:58:00.057057   26394 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:58:00.057059   26394 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:58:00.057062   26394 cri.go:89] found id: ""
	I1123 07:58:00.057107   26394 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:58:00.069633   26394 out.go:203] 
	W1123 07:58:00.070600   26394 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:58:00.070619   26394 out.go:285] * 
	* 
	W1123 07:58:00.073513   26394 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:58:00.074624   26394 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (146.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-959783 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-959783 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-959783 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [828d494c-78c8-46cb-9f34-f49c64b314a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [828d494c-78c8-46cb-9f34-f49c64b314a4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003363293s
I1123 07:58:07.717555   14488 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.326020994s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-959783 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
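The "ssh: Process exited with status 28" above is curl's timeout exit code (CURLE_OPERATION_TIMEDOUT): nothing behind 127.0.0.1:80 inside the node answered the Host: nginx.example.com request. A quick manual probe, assuming the profile is still running; the --max-time flag and the controller-pod check are illustrative additions, not part of the harness:

	minikube -p addons-959783 ssh -- curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/
	kubectl --context addons-959783 -n ingress-nginx get pods -o wide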
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-959783
helpers_test.go:243: (dbg) docker inspect addons-959783:

-- stdout --
	[
	    {
	        "Id": "854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd",
	        "Created": "2025-11-23T07:55:55.928302435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T07:55:55.95678841Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd-json.log",
	        "Name": "/addons-959783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-959783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-959783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd",
	                "LowerDir": "/var/lib/docker/overlay2/5a929a949d7a3fbf6a37cf0146c1192103ed2bee1529b031c4ed6f5ed4ac4c2d-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a929a949d7a3fbf6a37cf0146c1192103ed2bee1529b031c4ed6f5ed4ac4c2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a929a949d7a3fbf6a37cf0146c1192103ed2bee1529b031c4ed6f5ed4ac4c2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a929a949d7a3fbf6a37cf0146c1192103ed2bee1529b031c4ed6f5ed4ac4c2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-959783",
	                "Source": "/var/lib/docker/volumes/addons-959783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-959783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-959783",
	                "name.minikube.sigs.k8s.io": "addons-959783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d1069886fc94b823ca0e096eedae1bee7cf5427fc3f81535bf07d028296eb04a",
	            "SandboxKey": "/var/run/docker/netns/d1069886fc94",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-959783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a306ca547acc6c4434ea5deb2d8206350f1225a903e9f5ad0eda5ddcee5b3c23",
	                    "EndpointID": "bb7c8d51d293d902f617a6ac06a03bf1b2219034e7397fc6de5518a168ebd667",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "32:21:cd:96:00:65",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-959783",
	                        "854fc0b8c986"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
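For orientation, the SSH endpoint used throughout the stderr logs is derived from the NetworkSettings.Ports block above; the harness resolves 22/tcp with the same Go template that appears in the cli_runner lines, which for this container yields 127.0.0.1:32768 (the address the sshutil lines connect to):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-959783
	# expected output for this run: 32768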
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-959783 -n addons-959783
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-959783 logs -n 25: (1.076596234s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-218443 --alsologtostderr --binary-mirror http://127.0.0.1:42195 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-218443 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ -p binary-mirror-218443                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-218443 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ addons  │ enable dashboard -p addons-959783                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-959783                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ start   │ -p addons-959783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:57 UTC │
	│ addons  │ addons-959783 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ ssh     │ addons-959783 ssh cat /opt/local-path-provisioner/pvc-eb41d53f-743e-4287-8190-205dfc85238e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │ 23 Nov 25 07:57 UTC │
	│ addons  │ addons-959783 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-959783 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ ip      │ addons-959783 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │ 23 Nov 25 07:57 UTC │
	│ addons  │ addons-959783 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-959783                                                                                                                                                                                                                                                                                                                                                                                           │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │ 23 Nov 25 07:57 UTC │
	│ addons  │ addons-959783 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:58 UTC │                     │
	│ ssh     │ addons-959783 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:58 UTC │                     │
	│ addons  │ addons-959783 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:58 UTC │                     │
	│ addons  │ addons-959783 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 07:58 UTC │                     │
	│ ip      │ addons-959783 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-959783        │ jenkins │ v1.37.0 │ 23 Nov 25 08:00 UTC │ 23 Nov 25 08:00 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:33.156710   15847 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:33.156793   15847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:33.156804   15847 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:33.156811   15847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:33.156986   15847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:55:33.157523   15847 out.go:368] Setting JSON to false
	I1123 07:55:33.158355   15847 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2280,"bootTime":1763882253,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 07:55:33.158407   15847 start.go:143] virtualization: kvm guest
	I1123 07:55:33.160000   15847 out.go:179] * [addons-959783] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 07:55:33.161382   15847 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 07:55:33.161385   15847 notify.go:221] Checking for updates...
	I1123 07:55:33.163498   15847 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:33.164549   15847 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 07:55:33.165464   15847 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 07:55:33.166443   15847 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 07:55:33.167386   15847 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 07:55:33.168631   15847 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:33.191423   15847 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 07:55:33.191503   15847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:33.245955   15847 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 07:55:33.236560259 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:33.246054   15847 docker.go:319] overlay module found
	I1123 07:55:33.247722   15847 out.go:179] * Using the docker driver based on user configuration
	I1123 07:55:33.248764   15847 start.go:309] selected driver: docker
	I1123 07:55:33.248775   15847 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:33.248789   15847 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 07:55:33.249302   15847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:33.299624   15847 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 07:55:33.291203917 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:33.299801   15847 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:33.300022   15847 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 07:55:33.301577   15847 out.go:179] * Using Docker driver with root privileges
	I1123 07:55:33.302791   15847 cni.go:84] Creating CNI manager for ""
	I1123 07:55:33.302872   15847 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:55:33.302889   15847 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 07:55:33.302975   15847 start.go:353] cluster config:
	{Name:addons-959783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:55:33.304255   15847 out.go:179] * Starting "addons-959783" primary control-plane node in "addons-959783" cluster
	I1123 07:55:33.305222   15847 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 07:55:33.306287   15847 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 07:55:33.307446   15847 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:55:33.307469   15847 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 07:55:33.307476   15847 cache.go:65] Caching tarball of preloaded images
	I1123 07:55:33.307515   15847 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 07:55:33.307567   15847 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 07:55:33.307583   15847 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 07:55:33.307971   15847 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/config.json ...
	I1123 07:55:33.307996   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/config.json: {Name:mk2fb98b4f63c3df0dc6c7df814c098f300b1dfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:33.322541   15847 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:33.322645   15847 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 07:55:33.322660   15847 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 07:55:33.322664   15847 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 07:55:33.322674   15847 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 07:55:33.322678   15847 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 07:55:45.535843   15847 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 07:55:45.535878   15847 cache.go:243] Successfully downloaded all kic artifacts
	I1123 07:55:45.535928   15847 start.go:360] acquireMachinesLock for addons-959783: {Name:mkf4aef4d0f867e43fc9f52726964683306a64ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 07:55:45.536036   15847 start.go:364] duration metric: took 85.826µs to acquireMachinesLock for "addons-959783"
	I1123 07:55:45.536065   15847 start.go:93] Provisioning new machine with config: &{Name:addons-959783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 07:55:45.536150   15847 start.go:125] createHost starting for "" (driver="docker")
	I1123 07:55:45.537734   15847 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 07:55:45.537940   15847 start.go:159] libmachine.API.Create for "addons-959783" (driver="docker")
	I1123 07:55:45.537976   15847 client.go:173] LocalClient.Create starting
	I1123 07:55:45.538076   15847 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 07:55:45.586531   15847 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 07:55:45.646616   15847 cli_runner.go:164] Run: docker network inspect addons-959783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 07:55:45.663032   15847 cli_runner.go:211] docker network inspect addons-959783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 07:55:45.663097   15847 network_create.go:284] running [docker network inspect addons-959783] to gather additional debugging logs...
	I1123 07:55:45.663112   15847 cli_runner.go:164] Run: docker network inspect addons-959783
	W1123 07:55:45.678234   15847 cli_runner.go:211] docker network inspect addons-959783 returned with exit code 1
	I1123 07:55:45.678256   15847 network_create.go:287] error running [docker network inspect addons-959783]: docker network inspect addons-959783: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-959783 not found
	I1123 07:55:45.678266   15847 network_create.go:289] output of [docker network inspect addons-959783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-959783 not found
	
	** /stderr **
	I1123 07:55:45.678356   15847 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 07:55:45.693283   15847 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cc5bd0}
	I1123 07:55:45.693320   15847 network_create.go:124] attempt to create docker network addons-959783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 07:55:45.693375   15847 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-959783 addons-959783
	I1123 07:55:45.734356   15847 network_create.go:108] docker network addons-959783 192.168.49.0/24 created
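	
	For anyone reproducing this step locally, the subnet and gateway of the network minikube just created can be read back with the same Go-template style the tool itself uses; an illustrative spot-check, not part of the recorded run:
	
	  $ docker network inspect addons-959783 --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	  192.168.49.0/24 via 192.168.49.1
	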
	I1123 07:55:45.734378   15847 kic.go:121] calculated static IP "192.168.49.2" for the "addons-959783" container
	I1123 07:55:45.734448   15847 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 07:55:45.749039   15847 cli_runner.go:164] Run: docker volume create addons-959783 --label name.minikube.sigs.k8s.io=addons-959783 --label created_by.minikube.sigs.k8s.io=true
	I1123 07:55:45.765125   15847 oci.go:103] Successfully created a docker volume addons-959783
	I1123 07:55:45.765195   15847 cli_runner.go:164] Run: docker run --rm --name addons-959783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-959783 --entrypoint /usr/bin/test -v addons-959783:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 07:55:51.574936   15847 cli_runner.go:217] Completed: docker run --rm --name addons-959783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-959783 --entrypoint /usr/bin/test -v addons-959783:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (5.80968291s)
	I1123 07:55:51.574961   15847 oci.go:107] Successfully prepared a docker volume addons-959783
	I1123 07:55:51.575021   15847 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:55:51.575032   15847 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 07:55:51.575079   15847 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-959783:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 07:55:55.859348   15847 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-959783:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.284238399s)
	I1123 07:55:55.859377   15847 kic.go:203] duration metric: took 4.284340892s to extract preloaded images to volume ...
	W1123 07:55:55.859478   15847 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 07:55:55.859510   15847 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 07:55:55.859558   15847 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 07:55:55.913230   15847 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-959783 --name addons-959783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-959783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-959783 --network addons-959783 --ip 192.168.49.2 --volume addons-959783:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 07:55:56.208190   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Running}}
	I1123 07:55:56.226653   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:55:56.243915   15847 cli_runner.go:164] Run: docker exec addons-959783 stat /var/lib/dpkg/alternatives/iptables
	I1123 07:55:56.286955   15847 oci.go:144] the created container "addons-959783" has a running status.
	I1123 07:55:56.286985   15847 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa...
	I1123 07:55:56.428216   15847 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 07:55:56.455347   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:55:56.475310   15847 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 07:55:56.475334   15847 kic_runner.go:114] Args: [docker exec --privileged addons-959783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 07:55:56.527269   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:55:56.548370   15847 machine.go:94] provisionDockerMachine start ...
	I1123 07:55:56.548453   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:56.567568   15847 main.go:143] libmachine: Using SSH client type: native
	I1123 07:55:56.567932   15847 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 07:55:56.567952   15847 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 07:55:56.710791   15847 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-959783
	
	I1123 07:55:56.710822   15847 ubuntu.go:182] provisioning hostname "addons-959783"
	I1123 07:55:56.710877   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:56.729728   15847 main.go:143] libmachine: Using SSH client type: native
	I1123 07:55:56.730053   15847 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 07:55:56.730077   15847 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-959783 && echo "addons-959783" | sudo tee /etc/hostname
	I1123 07:55:56.878596   15847 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-959783
	
	I1123 07:55:56.878663   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:56.897038   15847 main.go:143] libmachine: Using SSH client type: native
	I1123 07:55:56.897304   15847 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 07:55:56.897332   15847 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-959783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-959783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-959783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 07:55:57.034156   15847 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 07:55:57.034182   15847 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 07:55:57.034218   15847 ubuntu.go:190] setting up certificates
	I1123 07:55:57.034234   15847 provision.go:84] configureAuth start
	I1123 07:55:57.034282   15847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-959783
	I1123 07:55:57.050842   15847 provision.go:143] copyHostCerts
	I1123 07:55:57.050901   15847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 07:55:57.051014   15847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 07:55:57.051086   15847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 07:55:57.051151   15847 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.addons-959783 san=[127.0.0.1 192.168.49.2 addons-959783 localhost minikube]
	I1123 07:55:57.189965   15847 provision.go:177] copyRemoteCerts
	I1123 07:55:57.190010   15847 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 07:55:57.190039   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.205585   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:57.303652   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 07:55:57.320794   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 07:55:57.336155   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
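	
	To confirm that the SANs requested above (127.0.0.1 192.168.49.2 addons-959783 localhost minikube) made it into server.pem, a reasonably recent openssl can print the extension directly; a hypothetical spot-check on the host, with the output ordering shown only for illustration:
	
	  $ openssl x509 -in /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem -noout -ext subjectAltName
	  X509v3 Subject Alternative Name:
	      DNS:addons-959783, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.49.2
	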
	I1123 07:55:57.351411   15847 provision.go:87] duration metric: took 317.163506ms to configureAuth
	I1123 07:55:57.351437   15847 ubuntu.go:206] setting minikube options for container-runtime
	I1123 07:55:57.351585   15847 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:55:57.351675   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.368431   15847 main.go:143] libmachine: Using SSH client type: native
	I1123 07:55:57.368628   15847 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 07:55:57.368646   15847 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 07:55:57.634346   15847 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 07:55:57.634375   15847 machine.go:97] duration metric: took 1.085984545s to provisionDockerMachine
	I1123 07:55:57.634390   15847 client.go:176] duration metric: took 12.096402844s to LocalClient.Create
	I1123 07:55:57.634415   15847 start.go:167] duration metric: took 12.09647401s to libmachine.API.Create "addons-959783"
	I1123 07:55:57.634427   15847 start.go:293] postStartSetup for "addons-959783" (driver="docker")
	I1123 07:55:57.634441   15847 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 07:55:57.634512   15847 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 07:55:57.634562   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.651185   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:57.750204   15847 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 07:55:57.753127   15847 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 07:55:57.753148   15847 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 07:55:57.753158   15847 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 07:55:57.753208   15847 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 07:55:57.753231   15847 start.go:296] duration metric: took 118.797117ms for postStartSetup
	I1123 07:55:57.753482   15847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-959783
	I1123 07:55:57.770391   15847 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/config.json ...
	I1123 07:55:57.770645   15847 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 07:55:57.770713   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.786281   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:57.880923   15847 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 07:55:57.884924   15847 start.go:128] duration metric: took 12.348761677s to createHost
	I1123 07:55:57.884946   15847 start.go:83] releasing machines lock for "addons-959783", held for 12.348894314s
	I1123 07:55:57.884999   15847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-959783
	I1123 07:55:57.900652   15847 ssh_runner.go:195] Run: cat /version.json
	I1123 07:55:57.900712   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.900741   15847 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 07:55:57.900815   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.917123   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:57.917896   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:58.063638   15847 ssh_runner.go:195] Run: systemctl --version
	I1123 07:55:58.069307   15847 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 07:55:58.099821   15847 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 07:55:58.103846   15847 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 07:55:58.103888   15847 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 07:55:58.126345   15847 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 07:55:58.126362   15847 start.go:496] detecting cgroup driver to use...
	I1123 07:55:58.126392   15847 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 07:55:58.126432   15847 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 07:55:58.140333   15847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 07:55:58.150824   15847 docker.go:218] disabling cri-docker service (if available) ...
	I1123 07:55:58.150865   15847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 07:55:58.165184   15847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 07:55:58.180098   15847 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 07:55:58.259944   15847 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 07:55:58.340461   15847 docker.go:234] disabling docker service ...
	I1123 07:55:58.340513   15847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 07:55:58.356115   15847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 07:55:58.367059   15847 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 07:55:58.443671   15847 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 07:55:58.518994   15847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 07:55:58.529646   15847 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 07:55:58.542104   15847 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 07:55:58.542154   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.551010   15847 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 07:55:58.551051   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.558715   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.566137   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.573573   15847 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 07:55:58.580451   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.587778   15847 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.599655   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
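	
	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the following effective settings; reconstructed from the commands rather than captured from the node, with TOML section headers elided:
	
	  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	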
	I1123 07:55:58.607079   15847 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 07:55:58.613355   15847 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1123 07:55:58.613389   15847 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1123 07:55:58.623831   15847 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 07:55:58.630202   15847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:55:58.700718   15847 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 07:55:58.827515   15847 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 07:55:58.827583   15847 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 07:55:58.831266   15847 start.go:564] Will wait 60s for crictl version
	I1123 07:55:58.831310   15847 ssh_runner.go:195] Run: which crictl
	I1123 07:55:58.834550   15847 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 07:55:58.855854   15847 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 07:55:58.855957   15847 ssh_runner.go:195] Run: crio --version
	I1123 07:55:58.881234   15847 ssh_runner.go:195] Run: crio --version
	I1123 07:55:58.907281   15847 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 07:55:58.908341   15847 cli_runner.go:164] Run: docker network inspect addons-959783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 07:55:58.924162   15847 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 07:55:58.927673   15847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 07:55:58.936828   15847 kubeadm.go:884] updating cluster {Name:addons-959783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 07:55:58.936916   15847 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:55:58.936952   15847 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 07:55:58.965029   15847 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 07:55:58.965044   15847 crio.go:433] Images already preloaded, skipping extraction
	I1123 07:55:58.965077   15847 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 07:55:58.987295   15847 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 07:55:58.987312   15847 cache_images.go:86] Images are preloaded, skipping loading
	I1123 07:55:58.987321   15847 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 07:55:58.987405   15847 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-959783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 07:55:58.987490   15847 ssh_runner.go:195] Run: crio config
	I1123 07:55:59.027327   15847 cni.go:84] Creating CNI manager for ""
	I1123 07:55:59.027353   15847 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:55:59.027369   15847 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 07:55:59.027389   15847 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-959783 NodeName:addons-959783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 07:55:59.027511   15847 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-959783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
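	
	A config rendered like the one above can be sanity-checked without touching the node via kubeadm's dry-run mode; illustrative, reusing the binary and yaml paths that appear elsewhere in this log, and not performed in this run:
	
	  $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run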
	
	I1123 07:55:59.027572   15847 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 07:55:59.034633   15847 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 07:55:59.034679   15847 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 07:55:59.041561   15847 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 07:55:59.052801   15847 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 07:55:59.066318   15847 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1123 07:55:59.077257   15847 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 07:55:59.080374   15847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 07:55:59.088928   15847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:55:59.164003   15847 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 07:55:59.186570   15847 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783 for IP: 192.168.49.2
	I1123 07:55:59.186589   15847 certs.go:195] generating shared ca certs ...
	I1123 07:55:59.186608   15847 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.186752   15847 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 07:55:59.312301   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt ...
	I1123 07:55:59.312322   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt: {Name:mkf9ae3aa353a1038c3c9284f3b747dfb88e5a7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.312457   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key ...
	I1123 07:55:59.312467   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key: {Name:mk2a71d7a34a8fc26d229e9c3bec7fe566491a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.312537   15847 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 07:55:59.357772   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt ...
	I1123 07:55:59.357791   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt: {Name:mk1712ce5ec45204d6baf790505c850656fa6dfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.357948   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key ...
	I1123 07:55:59.357960   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key: {Name:mka6eeff402c2a4034a73a12e7cc509daf81884d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.358028   15847 certs.go:257] generating profile certs ...
	I1123 07:55:59.358091   15847 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.key
	I1123 07:55:59.358106   15847 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt with IP's: []
	I1123 07:55:59.395650   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt ...
	I1123 07:55:59.395665   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: {Name:mk67858a934f6b320447a88246696849506d01ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.395778   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.key ...
	I1123 07:55:59.395788   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.key: {Name:mkf39414074f91513fe9b576d592bf8e68eec103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.395851   15847 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key.7ae8d156
	I1123 07:55:59.395868   15847 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt.7ae8d156 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 07:55:59.424071   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt.7ae8d156 ...
	I1123 07:55:59.424084   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt.7ae8d156: {Name:mk26beb0192f2f4e60dbbbd4abed4e3d12e48fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.424169   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key.7ae8d156 ...
	I1123 07:55:59.424181   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key.7ae8d156: {Name:mke123914847a03e89813cba5428a8cf87a25d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.424243   15847 certs.go:382] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt.7ae8d156 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt
	I1123 07:55:59.424322   15847 certs.go:386] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key.7ae8d156 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key
	I1123 07:55:59.424372   15847 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.key
	I1123 07:55:59.424388   15847 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.crt with IP's: []
	I1123 07:55:59.524614   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.crt ...
	I1123 07:55:59.524633   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.crt: {Name:mka0e42674bf934edeecfcd2657510a7d7d26a26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.524755   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.key ...
	I1123 07:55:59.524766   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.key: {Name:mk38ad5b056b853f9ef7993f6960383df204de9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
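	Three profile certs come out of this block: a client cert for "minikube-user", the apiserver serving cert signed for the service VIP (10.96.0.1), loopback, and the node IP (192.168.49.2), and a proxy-client cert for the "aggregator". A quick sketch for inspecting the SANs (path taken from this log; ~/.minikube on a default install):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expect IP addresses 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2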
	I1123 07:55:59.524935   15847 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 07:55:59.524969   15847 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 07:55:59.524995   15847 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 07:55:59.525019   15847 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 07:55:59.525545   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 07:55:59.542197   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 07:55:59.558015   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 07:55:59.573485   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 07:55:59.588804   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 07:55:59.603886   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 07:55:59.619248   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 07:55:59.634502   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 07:55:59.649842   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 07:55:59.666797   15847 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 07:55:59.677772   15847 ssh_runner.go:195] Run: openssl version
	I1123 07:55:59.683231   15847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 07:55:59.692799   15847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:55:59.695974   15847 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:55:59.696013   15847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:55:59.728927   15847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
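	The b5213941.0 link created here follows OpenSSL's hashed-directory convention: a CA is looked up by its subject-name hash plus a ".0" suffix, which is exactly the value the preceding `openssl x509 -hash` call computes. The same two steps by hand (sketch):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 here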
	I1123 07:55:59.736451   15847 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 07:55:59.739474   15847 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 07:55:59.739525   15847 kubeadm.go:401] StartCluster: {Name:addons-959783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:55:59.739595   15847 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:55:59.739645   15847 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:55:59.764548   15847 cri.go:89] found id: ""
	I1123 07:55:59.764598   15847 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 07:55:59.771557   15847 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 07:55:59.778360   15847 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 07:55:59.778427   15847 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 07:55:59.785079   15847 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 07:55:59.785093   15847 kubeadm.go:158] found existing configuration files:
	
	I1123 07:55:59.785124   15847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 07:55:59.791672   15847 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 07:55:59.791721   15847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 07:55:59.798113   15847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 07:55:59.804641   15847 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 07:55:59.804675   15847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 07:55:59.811086   15847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 07:55:59.817832   15847 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 07:55:59.817883   15847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 07:55:59.824288   15847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 07:55:59.830726   15847 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 07:55:59.830766   15847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
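	The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the upcoming kubeadm init regenerates it. As a loop (sketch):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done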
	I1123 07:55:59.837047   15847 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 07:55:59.870472   15847 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 07:55:59.870547   15847 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 07:55:59.898778   15847 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 07:55:59.898864   15847 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 07:55:59.898916   15847 kubeadm.go:319] OS: Linux
	I1123 07:55:59.898972   15847 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 07:55:59.899035   15847 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 07:55:59.899099   15847 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 07:55:59.899162   15847 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 07:55:59.899230   15847 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 07:55:59.899303   15847 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 07:55:59.899394   15847 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 07:55:59.899472   15847 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 07:55:59.951560   15847 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 07:55:59.951697   15847 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 07:55:59.951835   15847 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 07:55:59.958446   15847 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 07:55:59.960281   15847 out.go:252]   - Generating certificates and keys ...
	I1123 07:55:59.960384   15847 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 07:55:59.960475   15847 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 07:56:00.567610   15847 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 07:56:00.781734   15847 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 07:56:01.026817   15847 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 07:56:01.633126   15847 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 07:56:01.727367   15847 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 07:56:01.727515   15847 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-959783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 07:56:02.199062   15847 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 07:56:02.199220   15847 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-959783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 07:56:02.401047   15847 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 07:56:02.885187   15847 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 07:56:03.085556   15847 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 07:56:03.085644   15847 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 07:56:03.166832   15847 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 07:56:03.582969   15847 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 07:56:03.870715   15847 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 07:56:04.161284   15847 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 07:56:04.462888   15847 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 07:56:04.463364   15847 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 07:56:04.466793   15847 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 07:56:04.468064   15847 out.go:252]   - Booting up control plane ...
	I1123 07:56:04.468145   15847 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 07:56:04.468208   15847 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 07:56:04.468805   15847 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 07:56:04.481217   15847 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 07:56:04.481315   15847 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 07:56:04.487202   15847 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 07:56:04.487442   15847 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 07:56:04.487489   15847 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 07:56:04.580716   15847 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 07:56:04.580886   15847 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 07:56:05.582129   15847 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001571554s
	I1123 07:56:05.585830   15847 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 07:56:05.585948   15847 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 07:56:05.586060   15847 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 07:56:05.586187   15847 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 07:56:07.482152   15847 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.896256032s
	I1123 07:56:07.843937   15847 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.258082644s
	I1123 07:56:09.587741   15847 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00182609s
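	The kubelet-check and control-plane-check phases above poll four fixed endpoints; reproducing them by hand on the node is a quick way to localize a boot failure (sketch; -k because the serving certs are cluster-internal):

	curl -s  http://127.0.0.1:10248/healthz        # kubelet
	curl -sk https://192.168.49.2:8443/livez       # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz       # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez         # kube-scheduler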
	I1123 07:56:09.597413   15847 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 07:56:09.605848   15847 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 07:56:09.613241   15847 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 07:56:09.613490   15847 kubeadm.go:319] [mark-control-plane] Marking the node addons-959783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 07:56:09.619770   15847 kubeadm.go:319] [bootstrap-token] Using token: 3f5cqk.xr5m0zrekevhko6l
	I1123 07:56:09.620991   15847 out.go:252]   - Configuring RBAC rules ...
	I1123 07:56:09.621157   15847 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 07:56:09.623460   15847 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 07:56:09.627525   15847 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 07:56:09.630594   15847 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 07:56:09.632619   15847 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 07:56:09.634531   15847 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 07:56:09.992927   15847 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 07:56:10.404794   15847 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 07:56:10.992793   15847 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 07:56:10.993858   15847 kubeadm.go:319] 
	I1123 07:56:10.993946   15847 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 07:56:10.993956   15847 kubeadm.go:319] 
	I1123 07:56:10.994066   15847 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 07:56:10.994076   15847 kubeadm.go:319] 
	I1123 07:56:10.994118   15847 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 07:56:10.994218   15847 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 07:56:10.994306   15847 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 07:56:10.994318   15847 kubeadm.go:319] 
	I1123 07:56:10.994398   15847 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 07:56:10.994407   15847 kubeadm.go:319] 
	I1123 07:56:10.994471   15847 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 07:56:10.994480   15847 kubeadm.go:319] 
	I1123 07:56:10.994549   15847 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 07:56:10.994652   15847 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 07:56:10.994765   15847 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 07:56:10.994774   15847 kubeadm.go:319] 
	I1123 07:56:10.994883   15847 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 07:56:10.994982   15847 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 07:56:10.994990   15847 kubeadm.go:319] 
	I1123 07:56:10.995085   15847 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3f5cqk.xr5m0zrekevhko6l \
	I1123 07:56:10.995186   15847 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 07:56:10.995220   15847 kubeadm.go:319] 	--control-plane 
	I1123 07:56:10.995231   15847 kubeadm.go:319] 
	I1123 07:56:10.995338   15847 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 07:56:10.995350   15847 kubeadm.go:319] 
	I1123 07:56:10.995449   15847 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3f5cqk.xr5m0zrekevhko6l \
	I1123 07:56:10.995566   15847 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
	I1123 07:56:10.997091   15847 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 07:56:10.997205   15847 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 07:56:10.997234   15847 cni.go:84] Creating CNI manager for ""
	I1123 07:56:10.997245   15847 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:56:10.999367   15847 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 07:56:11.000431   15847 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 07:56:11.004842   15847 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 07:56:11.004859   15847 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 07:56:11.016560   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
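	With the docker driver paired with the crio runtime, minikube selects kindnet and applies the 2601-byte manifest staged at /var/tmp/minikube/cni.yaml. A hedged follow-up check, assuming kindnet's usual app=kindnet label:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get daemonsets,pods -n kube-system -l app=kindnet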
	I1123 07:56:11.199572   15847 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 07:56:11.199638   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:11.199682   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-959783 minikube.k8s.io/updated_at=2025_11_23T07_56_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=addons-959783 minikube.k8s.io/primary=true
	I1123 07:56:11.209467   15847 ops.go:34] apiserver oom_adj: -16
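	The oom_adj of -16 read back here biases the kernel OOM killer away from kube-apiserver, so under memory pressure other processes are reaped first. The same check, plus the modern knob (the kernel scales the legacy oom_adj value into oom_score_adj):

	cat /proc/$(pgrep -n kube-apiserver)/oom_adj        # legacy interface used above: -16
	cat /proc/$(pgrep -n kube-apiserver)/oom_score_adj  # derived modern equivalent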
	I1123 07:56:11.273678   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:11.774332   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:12.273731   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:12.773788   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:13.274151   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:13.774748   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:14.274471   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:14.773959   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:15.274487   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:15.773737   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:15.844058   15847 kubeadm.go:1114] duration metric: took 4.644475758s to wait for elevateKubeSystemPrivileges
	I1123 07:56:15.844092   15847 kubeadm.go:403] duration metric: took 16.104570337s to StartCluster
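	The burst of `kubectl get sa default` calls above is a ~500 ms poll: minikube waits for the default ServiceAccount to appear before granting kube-system elevated RBAC, and that wait is what the 4.64 s elevateKubeSystemPrivileges metric measures. The pattern as a sketch:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done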
	I1123 07:56:15.844113   15847 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:15.844232   15847 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 07:56:15.844843   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:15.845228   15847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 07:56:15.845281   15847 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 07:56:15.845372   15847 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 07:56:15.845438   15847 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:15.845507   15847 addons.go:70] Setting gcp-auth=true in profile "addons-959783"
	I1123 07:56:15.845514   15847 addons.go:70] Setting yakd=true in profile "addons-959783"
	I1123 07:56:15.845526   15847 mustload.go:66] Loading cluster: addons-959783
	I1123 07:56:15.845529   15847 addons.go:239] Setting addon yakd=true in "addons-959783"
	I1123 07:56:15.845550   15847 addons.go:70] Setting registry=true in profile "addons-959783"
	I1123 07:56:15.845564   15847 addons.go:70] Setting registry-creds=true in profile "addons-959783"
	I1123 07:56:15.845573   15847 addons.go:239] Setting addon registry=true in "addons-959783"
	I1123 07:56:15.845586   15847 addons.go:239] Setting addon registry-creds=true in "addons-959783"
	I1123 07:56:15.845601   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845620   15847 addons.go:70] Setting volcano=true in profile "addons-959783"
	I1123 07:56:15.845636   15847 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-959783"
	I1123 07:56:15.845656   15847 addons.go:239] Setting addon volcano=true in "addons-959783"
	I1123 07:56:15.845662   15847 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:15.845672   15847 addons.go:70] Setting storage-provisioner=true in profile "addons-959783"
	I1123 07:56:15.845702   15847 addons.go:239] Setting addon storage-provisioner=true in "addons-959783"
	I1123 07:56:15.845742   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845747   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845773   15847 addons.go:70] Setting cloud-spanner=true in profile "addons-959783"
	I1123 07:56:15.845841   15847 addons.go:239] Setting addon cloud-spanner=true in "addons-959783"
	I1123 07:56:15.845886   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845998   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846229   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846391   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.845556   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.846776   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845657   15847 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-959783"
	I1123 07:56:15.846928   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.847269   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.847603   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.847742   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846038   15847 addons.go:70] Setting inspektor-gadget=true in profile "addons-959783"
	I1123 07:56:15.848007   15847 addons.go:239] Setting addon inspektor-gadget=true in "addons-959783"
	I1123 07:56:15.848037   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.848158   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846059   15847 addons.go:70] Setting ingress-dns=true in profile "addons-959783"
	I1123 07:56:15.848433   15847 addons.go:239] Setting addon ingress-dns=true in "addons-959783"
	I1123 07:56:15.848473   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.848656   15847 out.go:179] * Verifying Kubernetes components...
	I1123 07:56:15.846069   15847 addons.go:70] Setting volumesnapshots=true in profile "addons-959783"
	I1123 07:56:15.848914   15847 addons.go:239] Setting addon volumesnapshots=true in "addons-959783"
	I1123 07:56:15.849003   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.846048   15847 addons.go:70] Setting ingress=true in profile "addons-959783"
	I1123 07:56:15.849080   15847 addons.go:239] Setting addon ingress=true in "addons-959783"
	I1123 07:56:15.849111   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.849561   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.850062   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846087   15847 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-959783"
	I1123 07:56:15.850458   15847 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-959783"
	I1123 07:56:15.846097   15847 addons.go:70] Setting metrics-server=true in profile "addons-959783"
	I1123 07:56:15.846129   15847 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-959783"
	I1123 07:56:15.846139   15847 addons.go:70] Setting default-storageclass=true in profile "addons-959783"
	I1123 07:56:15.846076   15847 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-959783"
	I1123 07:56:15.850747   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.850868   15847 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-959783"
	I1123 07:56:15.850897   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.850923   15847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:56:15.851321   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.851326   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.850749   15847 addons.go:239] Setting addon metrics-server=true in "addons-959783"
	I1123 07:56:15.852553   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.850801   15847 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-959783"
	I1123 07:56:15.853291   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.853943   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.850819   15847 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-959783"
	I1123 07:56:15.855668   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.855730   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.861424   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.864312   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	W1123 07:56:15.892065   15847 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 07:56:15.917887   15847 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 07:56:15.918129   15847 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 07:56:15.919257   15847 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 07:56:15.919847   15847 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 07:56:15.919865   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 07:56:15.919921   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.921162   15847 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 07:56:15.922059   15847 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 07:56:15.922072   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 07:56:15.922142   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.922573   15847 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 07:56:15.922795   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 07:56:15.923155   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.926699   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.932718   15847 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:15.935229   15847 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 07:56:15.937291   15847 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:15.938678   15847 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-959783"
	I1123 07:56:15.938731   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.939060   15847 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 07:56:15.939105   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 07:56:15.939171   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.939174   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.943699   15847 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 07:56:15.944722   15847 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 07:56:15.944765   15847 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 07:56:15.944834   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.957529   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 07:56:15.957645   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 07:56:15.958741   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 07:56:15.958758   15847 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 07:56:15.958811   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.958962   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 07:56:15.960043   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 07:56:15.961154   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 07:56:15.963614   15847 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 07:56:15.963915   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 07:56:15.966316   15847 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 07:56:15.966337   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 07:56:15.966387   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.966462   15847 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 07:56:15.967725   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 07:56:15.967761   15847 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 07:56:15.967776   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 07:56:15.967847   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.971386   15847 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 07:56:15.971961   15847 addons.go:239] Setting addon default-storageclass=true in "addons-959783"
	I1123 07:56:15.972158   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.972453   15847 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 07:56:15.972465   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 07:56:15.972506   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.973793   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 07:56:15.974007   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.976177   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 07:56:15.976762   15847 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 07:56:15.980369   15847 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 07:56:15.980418   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 07:56:15.980494   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.983792   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 07:56:15.983812   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 07:56:15.983860   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.988218   15847 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 07:56:15.989269   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 07:56:15.989287   15847 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 07:56:15.989336   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:16.002397   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
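	Every `docker container inspect -f '...HostPort...'` call above resolves the host port Docker published for the node container's sshd (22/tcp), and each resulting SSH client here connects to 127.0.0.1:32768. The short form of the same lookup (sketch):

	docker port addons-959783 22/tcp   # e.g. 0.0.0.0:32768, matching the SSH clients above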
	I1123 07:56:16.004958   15847 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 07:56:16.006324   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.007000   15847 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 07:56:16.009145   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 07:56:16.009198   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:16.014581   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.017837   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.024846   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.030006   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.042760   15847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 07:56:16.047148   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.047727   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.056510   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.058822   15847 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 07:56:16.058856   15847 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 07:56:16.058902   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:16.064793   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.067834   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.069224   15847 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 07:56:16.070339   15847 out.go:179]   - Using image docker.io/busybox:stable
	I1123 07:56:16.071702   15847 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 07:56:16.071725   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 07:56:16.071780   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	W1123 07:56:16.073091   15847 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 07:56:16.073117   15847 retry.go:31] will retry after 264.431648ms: ssh: handshake failed: EOF
	I1123 07:56:16.080045   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.096321   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	W1123 07:56:16.099369   15847 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 07:56:16.099393   15847 retry.go:31] will retry after 343.425901ms: ssh: handshake failed: EOF
	I1123 07:56:16.104499   15847 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 07:56:16.117096   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.119006   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.188657   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 07:56:16.189794   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 07:56:16.194133   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 07:56:16.210247   15847 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 07:56:16.210277   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 07:56:16.214026   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 07:56:16.224957   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 07:56:16.237052   15847 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 07:56:16.237075   15847 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 07:56:16.239175   15847 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 07:56:16.239248   15847 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 07:56:16.239561   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 07:56:16.239575   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 07:56:16.245835   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 07:56:16.245904   15847 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 07:56:16.251175   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 07:56:16.266499   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 07:56:16.275491   15847 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 07:56:16.275580   15847 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 07:56:16.278425   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 07:56:16.278647   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 07:56:16.278616   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 07:56:16.287596   15847 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 07:56:16.287659   15847 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 07:56:16.295311   15847 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 07:56:16.295327   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 07:56:16.298645   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 07:56:16.298702   15847 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 07:56:16.315674   15847 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 07:56:16.315871   15847 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 07:56:16.315801   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 07:56:16.315929   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 07:56:16.343193   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 07:56:16.350131   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 07:56:16.350153   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 07:56:16.352213   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 07:56:16.352228   15847 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 07:56:16.359450   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 07:56:16.381927   15847 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 07:56:16.381954   15847 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 07:56:16.398847   15847 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
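	
	The "host record injected" line refers to minikube rewriting the CoreDNS ConfigMap so pods can resolve host.minikube.internal to the host-side gateway (192.168.49.1 in this run). A quick way to confirm the injected stanza on a live cluster; the hosts block in the comment is the shape minikube typically writes, not a verbatim quote from this log:
	
		# Print the active Corefile; expect a hosts{} stanza along the lines of:
		#   hosts {
		#       192.168.49.1 host.minikube.internal
		#       fallthrough
		#   }
		kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	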
	I1123 07:56:16.400630   15847 node_ready.go:35] waiting up to 6m0s for node "addons-959783" to be "Ready" ...
	I1123 07:56:16.400971   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 07:56:16.400994   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 07:56:16.418413   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 07:56:16.418434   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 07:56:16.434577   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 07:56:16.434598   15847 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 07:56:16.466787   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 07:56:16.466805   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 07:56:16.496838   15847 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:56:16.496860   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 07:56:16.501325   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 07:56:16.517772   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 07:56:16.517851   15847 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 07:56:16.530047   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:56:16.564361   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 07:56:16.564437   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 07:56:16.571490   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 07:56:16.660436   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 07:56:16.660460   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 07:56:16.679931   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 07:56:16.685727   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 07:56:16.685821   15847 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 07:56:16.716613   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 07:56:16.908258   15847 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-959783" context rescaled to 1 replicas
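	
	On a single-node cluster minikube trims CoreDNS from its default replica count down to one. The rescale logged above is equivalent to running the following by hand (shown only to illustrate what kapi.go does through the API):
	
		kubectl -n kube-system scale deployment coredns --replicas=1
	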
	I1123 07:56:17.382406   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.193648945s)
	I1123 07:56:17.382445   15847 addons.go:495] Verifying addon ingress=true in "addons-959783"
	I1123 07:56:17.382593   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.188418241s)
	I1123 07:56:17.382534   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.192712172s)
	I1123 07:56:17.382663   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.168611763s)
	I1123 07:56:17.382728   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.157742809s)
	I1123 07:56:17.382776   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131543831s)
	I1123 07:56:17.382824   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.116305488s)
	I1123 07:56:17.382895   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.104177084s)
	I1123 07:56:17.382992   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.0397759s)
	I1123 07:56:17.383012   15847 addons.go:495] Verifying addon registry=true in "addons-959783"
	I1123 07:56:17.383072   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023597025s)
	I1123 07:56:17.383165   15847 addons.go:495] Verifying addon metrics-server=true in "addons-959783"
	I1123 07:56:17.384874   15847 out.go:179] * Verifying ingress addon...
	I1123 07:56:17.384885   15847 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-959783 service yakd-dashboard -n yakd-dashboard
	
	I1123 07:56:17.384958   15847 out.go:179] * Verifying registry addon...
	I1123 07:56:17.387059   15847 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 07:56:17.387063   15847 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1123 07:56:17.388266   15847 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
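	
	The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: two writers raced to update the same StorageClass, and the second write was rejected because its resourceVersion was stale. The standard remedy is to re-read and retry; a minimal manual sketch, assuming the usual default-class annotation:
	
		# Hypothetical retry loop for marking local-path as the default StorageClass
		for i in 1 2 3; do
		  kubectl annotate storageclass local-path \
		    storageclass.kubernetes.io/is-default-class=true --overwrite && break
		  sleep 1
		done
	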
	I1123 07:56:17.389588   15847 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 07:56:17.389602   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:17.389862   15847 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 07:56:17.389875   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
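	
	kapi.go's "waiting for pod" loop polls pods matching a label selector until they leave Pending and report Ready. Outside the test harness the same check can be expressed with kubectl; the 6m timeout below is illustrative, mirroring the node wait earlier in the log:
	
		kubectl -n kube-system wait --for=condition=Ready \
		  pod -l kubernetes.io/minikube-addons=registry --timeout=6m
		kubectl -n ingress-nginx wait --for=condition=Ready \
		  pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m
	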
	I1123 07:56:17.749879   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.219792764s)
	I1123 07:56:17.749917   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.178406529s)
	W1123 07:56:17.749931   15847 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 07:56:17.749957   15847 retry.go:31] will retry after 205.163164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
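	
	This failure is the classic CRD-ordering race: the VolumeSnapshotClass custom resource sits in the same kubectl apply batch as the CRDs that define it, and the API server has not established the new types by the time the CR is submitted ("ensure CRDs are installed first"). minikube simply retries, adding --force on the second attempt (below). A manual sequence that avoids the race entirely is to apply the CRDs, wait for them to be established, then apply the CRs; a sketch using the file names from this log:
	
		kubectl apply \
		  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
		  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
		  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
		  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
		  crd/volumesnapshots.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	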
	I1123 07:56:17.750006   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.069973495s)
	I1123 07:56:17.750175   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.033527273s)
	I1123 07:56:17.750191   15847 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-959783"
	I1123 07:56:17.751448   15847 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 07:56:17.753513   15847 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 07:56:17.755751   15847 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:56:17.755771   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:17.890088   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:17.890200   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:17.955649   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:56:18.255843   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:18.389377   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:18.389526   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:18.403317   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
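	
	The node_ready warnings are the 6m0s poll set up earlier, checking the node's Ready condition, which stays False until the kubelet and networking settle. Equivalent one-shot checks with kubectl:
	
		kubectl get node addons-959783 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
		kubectl wait --for=condition=Ready node/addons-959783 --timeout=6m
	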
	I1123 07:56:18.756311   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:18.889579   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:18.889681   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:19.255518   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:19.390034   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:19.390222   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:19.756383   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:19.889495   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:19.889746   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:20.256292   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:20.360403   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.404714891s)
	I1123 07:56:20.389919   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:20.390109   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:20.756121   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:20.889313   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:20.889492   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:20.902826   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:21.255989   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:21.389982   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:21.390158   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:21.756673   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:21.889612   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:21.889779   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:22.256624   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:22.389521   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:22.389709   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:22.756012   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:22.890006   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:22.890151   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:23.255850   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:23.389203   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:23.389355   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:23.402815   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:23.535718   15847 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 07:56:23.535781   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:23.552345   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:23.661619   15847 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 07:56:23.673167   15847 addons.go:239] Setting addon gcp-auth=true in "addons-959783"
	I1123 07:56:23.673222   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:23.673554   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:23.690736   15847 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 07:56:23.690783   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:23.706743   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
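	
	With the docker driver there is no VM: the "node" is a container, and minikube discovers its SSH endpoint by asking Docker which host port is mapped to the container's 22/tcp (32768 above). Reproducing that lookup by hand, with the ssh invocation as an illustrative equivalent of the client sshutil.go opens:
	
		PORT=$(docker container inspect \
		  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-959783)
		ssh -p "$PORT" \
		  -i /home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa \
		  docker@127.0.0.1
	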
	I1123 07:56:23.757019   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:23.802566   15847 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:23.803597   15847 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 07:56:23.804679   15847 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 07:56:23.804706   15847 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 07:56:23.816750   15847 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 07:56:23.816765   15847 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 07:56:23.828366   15847 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 07:56:23.828383   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 07:56:23.839875   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 07:56:23.890365   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:23.890445   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:24.125585   15847 addons.go:495] Verifying addon gcp-auth=true in "addons-959783"
	I1123 07:56:24.126911   15847 out.go:179] * Verifying gcp-auth addon...
	I1123 07:56:24.128444   15847 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 07:56:24.131431   15847 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 07:56:24.131447   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
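	
	gcp-auth works through a mutating admission webhook: once the gcp-auth pod is Ready, the webhook injects GOOGLE_APPLICATION_CREDENTIALS (backed by the credentials file copied to the node above) into newly created pods. Generic inspection commands to see the pieces after the addon settles; the webhook object's exact name is not shown in this log, hence the grep:
	
		kubectl -n gcp-auth get pods,svc
		kubectl get mutatingwebhookconfigurations | grep -i gcp-auth
	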
	I1123 07:56:24.256339   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:24.389469   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:24.389748   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:24.631614   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:24.755711   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:24.889779   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:24.890029   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:25.130998   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:25.256066   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:25.390075   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:25.390298   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:25.402877   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:25.631106   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:25.756213   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:25.889533   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:25.889569   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:26.131230   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:26.256371   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:26.389399   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:26.389585   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:26.631336   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:26.756525   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:26.889647   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:26.889958   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:27.130942   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:27.256299   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:27.389013   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:27.389247   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:27.630638   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:27.755484   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:27.889629   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:27.889696   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:27.903229   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:28.131540   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:28.255658   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:28.389742   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:28.389925   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:28.632090   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:28.755946   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:28.890062   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:28.890102   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:29.130765   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:29.256197   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:29.389562   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:29.389637   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:29.631806   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:29.755816   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:29.889958   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:29.890018   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:30.130648   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:30.255790   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:30.389920   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:30.390175   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:30.402595   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:30.631009   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:30.756076   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:30.889436   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:30.889516   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:31.131392   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:31.256824   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:31.389760   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:31.390019   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:31.630964   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:31.756130   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:31.889071   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:31.889218   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:32.131056   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:32.256399   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:32.389441   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:32.389614   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:32.403240   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:32.631573   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:32.755634   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:32.889832   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:32.889888   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:33.131738   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:33.256259   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:33.389370   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:33.389421   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:33.631384   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:33.756547   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:33.889726   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:33.889937   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:34.131371   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:34.256522   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:34.389424   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:34.389530   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:34.631466   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:34.756399   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:34.889456   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:34.889513   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:34.903008   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:35.131076   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:35.256190   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:35.389100   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:35.389290   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:35.631098   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:35.756018   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:35.890327   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:35.890377   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:36.131285   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:36.256457   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:36.389402   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:36.389485   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:36.631254   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:36.756475   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:36.889676   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:36.889770   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:37.130594   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:37.255782   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:37.389667   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:37.389843   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:37.402314   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:37.630458   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:37.756709   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:37.889622   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:37.889751   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:38.131502   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:38.255754   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:38.389744   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:38.389929   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:38.631426   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:38.756664   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:38.889907   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:38.890022   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:39.131133   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:39.256407   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:39.389570   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:39.389754   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:39.403154   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:39.631540   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:39.755397   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:39.889446   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:39.889561   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:40.131439   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:40.256641   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:40.389595   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:40.389720   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:40.631576   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:40.755682   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:40.889908   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:40.889908   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:41.131837   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:41.255966   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:41.389827   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:41.390033   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:41.630811   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:41.756048   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:41.889998   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:41.890197   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:41.902733   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:42.130868   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:42.255912   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:42.389782   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:42.389921   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:42.630675   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:42.755718   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:42.889770   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:42.889947   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:43.130759   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:43.255894   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:43.389944   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:43.390117   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:43.630950   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:43.755876   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:43.890005   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:43.890166   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:43.902932   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:44.131252   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:44.256250   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:44.389173   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:44.389342   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:44.631208   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:44.756299   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:44.889232   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:44.889338   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:45.131039   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:45.255976   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:45.390040   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:45.390170   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:45.630847   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:45.755919   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:45.889907   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:45.890097   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:46.131214   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:46.256454   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:46.389498   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:46.389586   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:46.403135   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:46.631610   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:46.755561   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:46.889682   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:46.889791   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:47.131481   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:47.256989   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:47.390137   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:47.390188   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:47.631159   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:47.756189   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:47.889237   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:47.889412   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:48.131189   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:48.256545   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:48.389485   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:48.389676   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:48.631437   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:48.756417   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:48.889378   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:48.889432   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:48.903023   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:49.131472   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:49.256303   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:49.389395   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:49.389519   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:49.631344   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:49.756499   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:49.889681   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:49.889803   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:50.130318   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:50.256409   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:50.389433   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:50.389546   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:50.631350   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:50.756557   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:50.889507   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:50.889628   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:50.903164   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:51.131584   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:51.255358   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:51.389448   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:51.389543   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:51.631415   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:51.756631   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:51.889929   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:51.890188   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:52.130813   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:52.255966   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:52.390097   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:52.390235   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:52.631420   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:52.756344   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:52.889366   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:52.889597   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:53.131374   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:53.259179   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:53.389328   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:53.389381   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:53.402890   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:53.631437   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:53.756468   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:53.889712   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:53.889816   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:54.130595   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:54.255775   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:54.389841   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:54.389908   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:54.632087   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:54.755653   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:54.889760   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:54.889893   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:55.130625   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:55.255545   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:55.389901   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:55.389912   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:55.630669   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:55.755566   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:55.889473   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:55.889649   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:55.903340   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:56.130603   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:56.255498   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:56.389571   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:56.389854   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:56.631503   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:56.756556   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:56.890127   15847 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 07:56:56.890157   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:56.890164   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:56.902959   15847 node_ready.go:49] node "addons-959783" is "Ready"
	I1123 07:56:56.902988   15847 node_ready.go:38] duration metric: took 40.502328698s for node "addons-959783" to be "Ready" ...
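The node_ready.go lines above poll the node's "Ready" condition until it flips to "True". A minimal sketch of that check, assuming client-go, a kubeconfig at the default path, and the node name taken from the log; this is an illustration, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (assumption: the test profile's context is current).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-959783", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Report the Ready condition, mirroring the node_ready.go log lines.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %q has \"Ready\":%q\n", node.Name, c.Status)
		}
	}
}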
	I1123 07:56:56.903005   15847 api_server.go:52] waiting for apiserver process to appear ...
	I1123 07:56:56.903055   15847 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 07:56:56.918739   15847 api_server.go:72] duration metric: took 41.073417085s to wait for apiserver process to appear ...
	I1123 07:56:56.918762   15847 api_server.go:88] waiting for apiserver healthz status ...
	I1123 07:56:56.918783   15847 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 07:56:56.923021   15847 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 07:56:56.923749   15847 api_server.go:141] control plane version: v1.34.1
	I1123 07:56:56.923769   15847 api_server.go:131] duration metric: took 5.000989ms to wait for apiserver health ...
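The healthz probe at api_server.go:253 above is a plain HTTPS GET that expects a 200 "ok" body. A minimal sketch of that probe, reusing the endpoint from the log; InsecureSkipVerify is an illustration-only shortcut, since the real client trusts the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skipping cert verification only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Matches the log: "returned 200" followed by the body "ok".
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}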
	I1123 07:56:56.923778   15847 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 07:56:56.926828   15847 system_pods.go:59] 20 kube-system pods found
	I1123 07:56:56.926851   15847 system_pods.go:61] "amd-gpu-device-plugin-kcdzf" [e3f0739c-033b-404d-8651-715b88a2e213] Pending
	I1123 07:56:56.926870   15847 system_pods.go:61] "coredns-66bc5c9577-bzmrl" [062b2ef0-f93f-4022-b8f0-a63c7d823974] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:56:56.926876   15847 system_pods.go:61] "csi-hostpath-attacher-0" [a977a27e-c722-4201-a0f0-a0ca8bb5f495] Pending
	I1123 07:56:56.926885   15847 system_pods.go:61] "csi-hostpath-resizer-0" [ba1e1292-a73b-43ed-a6a8-e5c5cd69eaf0] Pending
	I1123 07:56:56.926890   15847 system_pods.go:61] "csi-hostpathplugin-8skb7" [078e8b91-1aff-4ba5-b419-3e99727fa05c] Pending
	I1123 07:56:56.926896   15847 system_pods.go:61] "etcd-addons-959783" [d270e85a-ae07-4e3e-883b-b6f83d9e85f1] Running
	I1123 07:56:56.926905   15847 system_pods.go:61] "kindnet-vqst5" [2384322c-daa2-40b5-9107-b18c55e3ce5a] Running
	I1123 07:56:56.926911   15847 system_pods.go:61] "kube-apiserver-addons-959783" [49ec5d7c-f7e6-4871-a3c4-ae8b16fcfa0c] Running
	I1123 07:56:56.926922   15847 system_pods.go:61] "kube-controller-manager-addons-959783" [26215941-0128-41c5-ae74-08552252b345] Running
	I1123 07:56:56.926934   15847 system_pods.go:61] "kube-ingress-dns-minikube" [8fc836c3-712f-4578-a86e-9e5f461a0e7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:56:56.926937   15847 system_pods.go:61] "kube-proxy-lrdk2" [0e382777-1804-494e-876d-80638a083b09] Running
	I1123 07:56:56.926943   15847 system_pods.go:61] "kube-scheduler-addons-959783" [d38e1eb6-419d-4b2c-b4ea-96259ab52844] Running
	I1123 07:56:56.926950   15847 system_pods.go:61] "metrics-server-85b7d694d7-87jkk" [71097df3-1b14-4559-b01c-7084f8d00b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:56:56.926953   15847 system_pods.go:61] "nvidia-device-plugin-daemonset-gft7l" [81c12107-652e-454a-9b52-5b44ffb4e5f9] Pending
	I1123 07:56:56.926958   15847 system_pods.go:61] "registry-6b586f9694-mq8bw" [e0c7828e-fc45-45aa-b3c4-89e8cad6740e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:56:56.926964   15847 system_pods.go:61] "registry-creds-764b6fb674-5nncl" [dbe053d8-8038-4931-a819-4d425afcb649] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:56:56.926969   15847 system_pods.go:61] "registry-proxy-txmj8" [61917d5c-8217-4b89-b9e1-02789e24dd18] Pending
	I1123 07:56:56.926972   15847 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q26tv" [17e5c1bb-2c63-43a1-96f2-192bedd89a52] Pending
	I1123 07:56:56.926981   15847 system_pods.go:61] "snapshot-controller-7d9fbc56b8-smbtq" [662fd35e-b22f-4ee7-ba9e-95c62fe7d1ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:56.926990   15847 system_pods.go:61] "storage-provisioner" [d02ed26c-3769-4e72-90f8-5ea46e43c143] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:56:56.926998   15847 system_pods.go:74] duration metric: took 3.213757ms to wait for pod list to return data ...
	I1123 07:56:56.927010   15847 default_sa.go:34] waiting for default service account to be created ...
	I1123 07:56:56.928716   15847 default_sa.go:45] found service account: "default"
	I1123 07:56:56.928730   15847 default_sa.go:55] duration metric: took 1.715359ms for default service account to be created ...
	I1123 07:56:56.928738   15847 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 07:56:56.931320   15847 system_pods.go:86] 20 kube-system pods found
	I1123 07:56:56.931344   15847 system_pods.go:89] "amd-gpu-device-plugin-kcdzf" [e3f0739c-033b-404d-8651-715b88a2e213] Pending
	I1123 07:56:56.931354   15847 system_pods.go:89] "coredns-66bc5c9577-bzmrl" [062b2ef0-f93f-4022-b8f0-a63c7d823974] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:56:56.931361   15847 system_pods.go:89] "csi-hostpath-attacher-0" [a977a27e-c722-4201-a0f0-a0ca8bb5f495] Pending
	I1123 07:56:56.931366   15847 system_pods.go:89] "csi-hostpath-resizer-0" [ba1e1292-a73b-43ed-a6a8-e5c5cd69eaf0] Pending
	I1123 07:56:56.931372   15847 system_pods.go:89] "csi-hostpathplugin-8skb7" [078e8b91-1aff-4ba5-b419-3e99727fa05c] Pending
	I1123 07:56:56.931377   15847 system_pods.go:89] "etcd-addons-959783" [d270e85a-ae07-4e3e-883b-b6f83d9e85f1] Running
	I1123 07:56:56.931382   15847 system_pods.go:89] "kindnet-vqst5" [2384322c-daa2-40b5-9107-b18c55e3ce5a] Running
	I1123 07:56:56.931387   15847 system_pods.go:89] "kube-apiserver-addons-959783" [49ec5d7c-f7e6-4871-a3c4-ae8b16fcfa0c] Running
	I1123 07:56:56.931392   15847 system_pods.go:89] "kube-controller-manager-addons-959783" [26215941-0128-41c5-ae74-08552252b345] Running
	I1123 07:56:56.931402   15847 system_pods.go:89] "kube-ingress-dns-minikube" [8fc836c3-712f-4578-a86e-9e5f461a0e7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:56:56.931407   15847 system_pods.go:89] "kube-proxy-lrdk2" [0e382777-1804-494e-876d-80638a083b09] Running
	I1123 07:56:56.931413   15847 system_pods.go:89] "kube-scheduler-addons-959783" [d38e1eb6-419d-4b2c-b4ea-96259ab52844] Running
	I1123 07:56:56.931420   15847 system_pods.go:89] "metrics-server-85b7d694d7-87jkk" [71097df3-1b14-4559-b01c-7084f8d00b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:56:56.931426   15847 system_pods.go:89] "nvidia-device-plugin-daemonset-gft7l" [81c12107-652e-454a-9b52-5b44ffb4e5f9] Pending
	I1123 07:56:56.931434   15847 system_pods.go:89] "registry-6b586f9694-mq8bw" [e0c7828e-fc45-45aa-b3c4-89e8cad6740e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:56:56.931442   15847 system_pods.go:89] "registry-creds-764b6fb674-5nncl" [dbe053d8-8038-4931-a819-4d425afcb649] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:56:56.931458   15847 system_pods.go:89] "registry-proxy-txmj8" [61917d5c-8217-4b89-b9e1-02789e24dd18] Pending
	I1123 07:56:56.931464   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q26tv" [17e5c1bb-2c63-43a1-96f2-192bedd89a52] Pending
	I1123 07:56:56.931471   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-smbtq" [662fd35e-b22f-4ee7-ba9e-95c62fe7d1ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:56.931487   15847 system_pods.go:89] "storage-provisioner" [d02ed26c-3769-4e72-90f8-5ea46e43c143] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:56:56.931504   15847 retry.go:31] will retry after 238.753536ms: missing components: kube-dns
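The retry.go:31 line above shows the wait loop backing off while kube-dns is still missing. A hedged sketch of that pattern, with a hypothetical stand-in check function in place of the real system_pods scan:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntilReady polls check with randomized backoff until it returns nil
// or the deadline passes, echoing the "will retry after ..." log lines.
func retryUntilReady(check func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		backoff := time.Duration(200+rand.Intn(200)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
	}
}

func main() {
	attempts := 0
	err := retryUntilReady(func() error {
		attempts++
		if attempts < 3 { // simulate kube-dns becoming ready on the third poll
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}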
	I1123 07:56:57.131351   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:57.234005   15847 system_pods.go:86] 20 kube-system pods found
	I1123 07:56:57.234042   15847 system_pods.go:89] "amd-gpu-device-plugin-kcdzf" [e3f0739c-033b-404d-8651-715b88a2e213] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 07:56:57.234054   15847 system_pods.go:89] "coredns-66bc5c9577-bzmrl" [062b2ef0-f93f-4022-b8f0-a63c7d823974] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:56:57.234062   15847 system_pods.go:89] "csi-hostpath-attacher-0" [a977a27e-c722-4201-a0f0-a0ca8bb5f495] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:56:57.234073   15847 system_pods.go:89] "csi-hostpath-resizer-0" [ba1e1292-a73b-43ed-a6a8-e5c5cd69eaf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:56:57.234093   15847 system_pods.go:89] "csi-hostpathplugin-8skb7" [078e8b91-1aff-4ba5-b419-3e99727fa05c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:56:57.234100   15847 system_pods.go:89] "etcd-addons-959783" [d270e85a-ae07-4e3e-883b-b6f83d9e85f1] Running
	I1123 07:56:57.234107   15847 system_pods.go:89] "kindnet-vqst5" [2384322c-daa2-40b5-9107-b18c55e3ce5a] Running
	I1123 07:56:57.234113   15847 system_pods.go:89] "kube-apiserver-addons-959783" [49ec5d7c-f7e6-4871-a3c4-ae8b16fcfa0c] Running
	I1123 07:56:57.234119   15847 system_pods.go:89] "kube-controller-manager-addons-959783" [26215941-0128-41c5-ae74-08552252b345] Running
	I1123 07:56:57.234126   15847 system_pods.go:89] "kube-ingress-dns-minikube" [8fc836c3-712f-4578-a86e-9e5f461a0e7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:56:57.234132   15847 system_pods.go:89] "kube-proxy-lrdk2" [0e382777-1804-494e-876d-80638a083b09] Running
	I1123 07:56:57.234138   15847 system_pods.go:89] "kube-scheduler-addons-959783" [d38e1eb6-419d-4b2c-b4ea-96259ab52844] Running
	I1123 07:56:57.234152   15847 system_pods.go:89] "metrics-server-85b7d694d7-87jkk" [71097df3-1b14-4559-b01c-7084f8d00b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:56:57.234160   15847 system_pods.go:89] "nvidia-device-plugin-daemonset-gft7l" [81c12107-652e-454a-9b52-5b44ffb4e5f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:56:57.234171   15847 system_pods.go:89] "registry-6b586f9694-mq8bw" [e0c7828e-fc45-45aa-b3c4-89e8cad6740e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:56:57.234181   15847 system_pods.go:89] "registry-creds-764b6fb674-5nncl" [dbe053d8-8038-4931-a819-4d425afcb649] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:56:57.234190   15847 system_pods.go:89] "registry-proxy-txmj8" [61917d5c-8217-4b89-b9e1-02789e24dd18] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:56:57.234198   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q26tv" [17e5c1bb-2c63-43a1-96f2-192bedd89a52] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:57.234208   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-smbtq" [662fd35e-b22f-4ee7-ba9e-95c62fe7d1ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:57.234216   15847 system_pods.go:89] "storage-provisioner" [d02ed26c-3769-4e72-90f8-5ea46e43c143] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:56:57.234238   15847 retry.go:31] will retry after 314.436306ms: missing components: kube-dns
	I1123 07:56:57.321481   15847 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:56:57.321510   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:57.389459   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:57.389581   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:57.554093   15847 system_pods.go:86] 20 kube-system pods found
	I1123 07:56:57.554129   15847 system_pods.go:89] "amd-gpu-device-plugin-kcdzf" [e3f0739c-033b-404d-8651-715b88a2e213] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 07:56:57.554138   15847 system_pods.go:89] "coredns-66bc5c9577-bzmrl" [062b2ef0-f93f-4022-b8f0-a63c7d823974] Running
	I1123 07:56:57.554148   15847 system_pods.go:89] "csi-hostpath-attacher-0" [a977a27e-c722-4201-a0f0-a0ca8bb5f495] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:56:57.554159   15847 system_pods.go:89] "csi-hostpath-resizer-0" [ba1e1292-a73b-43ed-a6a8-e5c5cd69eaf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:56:57.554171   15847 system_pods.go:89] "csi-hostpathplugin-8skb7" [078e8b91-1aff-4ba5-b419-3e99727fa05c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:56:57.554182   15847 system_pods.go:89] "etcd-addons-959783" [d270e85a-ae07-4e3e-883b-b6f83d9e85f1] Running
	I1123 07:56:57.554189   15847 system_pods.go:89] "kindnet-vqst5" [2384322c-daa2-40b5-9107-b18c55e3ce5a] Running
	I1123 07:56:57.554196   15847 system_pods.go:89] "kube-apiserver-addons-959783" [49ec5d7c-f7e6-4871-a3c4-ae8b16fcfa0c] Running
	I1123 07:56:57.554202   15847 system_pods.go:89] "kube-controller-manager-addons-959783" [26215941-0128-41c5-ae74-08552252b345] Running
	I1123 07:56:57.554216   15847 system_pods.go:89] "kube-ingress-dns-minikube" [8fc836c3-712f-4578-a86e-9e5f461a0e7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:56:57.554227   15847 system_pods.go:89] "kube-proxy-lrdk2" [0e382777-1804-494e-876d-80638a083b09] Running
	I1123 07:56:57.554233   15847 system_pods.go:89] "kube-scheduler-addons-959783" [d38e1eb6-419d-4b2c-b4ea-96259ab52844] Running
	I1123 07:56:57.554240   15847 system_pods.go:89] "metrics-server-85b7d694d7-87jkk" [71097df3-1b14-4559-b01c-7084f8d00b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:56:57.554248   15847 system_pods.go:89] "nvidia-device-plugin-daemonset-gft7l" [81c12107-652e-454a-9b52-5b44ffb4e5f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:56:57.554256   15847 system_pods.go:89] "registry-6b586f9694-mq8bw" [e0c7828e-fc45-45aa-b3c4-89e8cad6740e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:56:57.554264   15847 system_pods.go:89] "registry-creds-764b6fb674-5nncl" [dbe053d8-8038-4931-a819-4d425afcb649] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:56:57.554271   15847 system_pods.go:89] "registry-proxy-txmj8" [61917d5c-8217-4b89-b9e1-02789e24dd18] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:56:57.554281   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q26tv" [17e5c1bb-2c63-43a1-96f2-192bedd89a52] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:57.554294   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-smbtq" [662fd35e-b22f-4ee7-ba9e-95c62fe7d1ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:57.554300   15847 system_pods.go:89] "storage-provisioner" [d02ed26c-3769-4e72-90f8-5ea46e43c143] Running
	I1123 07:56:57.554315   15847 system_pods.go:126] duration metric: took 625.571024ms to wait for k8s-apps to be running ...
	I1123 07:56:57.554325   15847 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 07:56:57.554383   15847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 07:56:57.570040   15847 system_svc.go:56] duration metric: took 15.707782ms WaitForService to wait for kubelet
	I1123 07:56:57.570071   15847 kubeadm.go:587] duration metric: took 41.724751856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 07:56:57.570092   15847 node_conditions.go:102] verifying NodePressure condition ...
	I1123 07:56:57.572825   15847 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 07:56:57.572854   15847 node_conditions.go:123] node cpu capacity is 8
	I1123 07:56:57.572872   15847 node_conditions.go:105] duration metric: took 2.773352ms to run NodePressure ...
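The node_conditions.go lines above read the node's capacity (ephemeral storage, CPU) from its status. A minimal client-go sketch of the same lookup, assuming the default kubeconfig path; the field names are from the Kubernetes API, not minikube's code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print per-node capacity, mirroring "node storage ephemeral capacity is
	// 304681132Ki" and "node cpu capacity is 8" from the log.
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String(),
		)
	}
}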
	I1123 07:56:57.572886   15847 start.go:242] waiting for startup goroutines ...
	I1123 07:56:57.652823   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:57.756494   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:57.890178   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:57.890260   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:58.132187   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:58.257309   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:58.389676   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:58.389758   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:58.631367   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:58.757966   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:58.890054   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:58.890226   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:59.132867   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:59.257959   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:59.391788   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:59.391950   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:59.632122   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:59.757117   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:59.890763   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:59.890910   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:00.131379   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:00.257431   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:00.389892   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:00.390043   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:00.631263   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:00.756457   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:00.889580   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:00.889612   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:01.131363   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:01.257517   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:01.393314   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:01.393912   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:01.632166   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:01.758085   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:01.890434   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:01.890517   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:02.132525   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:02.256530   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:02.442187   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:02.442199   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:02.631768   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:02.756578   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:02.890363   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:02.890394   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:03.131274   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:03.256354   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:03.390484   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:03.390517   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:03.631122   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:03.756636   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:03.889951   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:03.890119   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:04.131930   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:04.257024   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:04.390811   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:04.390876   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:04.632289   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:04.763762   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:04.890672   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:04.890752   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:05.131454   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:05.257599   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:05.390363   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:05.390543   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:05.632295   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:05.757096   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:05.890842   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:05.890881   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:06.132043   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:06.258352   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:06.485525   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:06.485553   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:06.653802   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:06.756627   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:06.890271   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:06.890448   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.132137   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:07.256485   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:07.390720   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:07.390948   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.631145   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:07.757043   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:07.890390   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.890397   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:08.131108   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:08.256515   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:08.389710   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:08.389831   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:08.632670   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:08.756730   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:08.890732   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:08.891097   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:09.132280   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:09.257179   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:09.390855   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:09.390936   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:09.631833   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:09.756959   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:09.891097   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:09.891189   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.132255   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:10.257086   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:10.390501   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.390587   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:10.631044   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:10.756277   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:10.889276   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.889340   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:11.132161   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:11.257059   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:11.390719   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:11.390774   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:11.631047   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:11.757249   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:11.889977   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:11.890024   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:12.132146   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:12.257471   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:12.390818   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:12.390921   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:12.631812   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:12.756877   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:12.890622   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:12.890820   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:13.131323   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:13.257042   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:13.390580   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:13.390645   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:13.631712   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:13.756785   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:13.890133   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:13.890204   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:14.131611   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:14.256381   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:14.391421   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:14.403022   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:14.631290   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:14.757345   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:14.890026   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:14.890098   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:15.132252   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:15.256997   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:15.390198   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:15.390419   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:15.630977   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:15.756543   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:15.889833   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:15.889908   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:16.131803   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:16.256556   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:16.391586   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:16.391699   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:16.631334   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:16.756674   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:16.889853   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:16.889898   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:17.131287   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:17.256578   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:17.389834   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:17.390044   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:17.632102   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:17.757139   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:17.891476   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:17.891647   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:18.133218   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:18.258783   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:18.390924   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:18.390982   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:18.632260   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:18.757278   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:18.889908   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:18.889947   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:19.131218   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:19.256313   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:19.390226   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:19.390394   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:19.632748   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:19.756418   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:19.890229   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:19.890293   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:20.130992   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:20.256640   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:20.390055   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:20.390086   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:20.631882   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:20.756440   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:20.889588   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:20.889598   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.131625   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:21.256489   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:21.389896   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:21.389986   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.631355   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:21.758585   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:21.890472   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.891326   15847 kapi.go:107] duration metric: took 1m4.50426627s to wait for kubernetes.io/minikube-addons=registry ...
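The kapi.go:96/107 lines that dominate this log are a label-selector poll: list the pods matching the selector, log their phase while any is still Pending, and stop once all are running. A hedged sketch of that loop, using pod phase as a simplification of the real readiness check; waitForLabel is a hypothetical helper, not minikube's API:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns reports phase
// Running, printing "waiting for pod" lines like the log above while it waits.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute))
}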
	I1123 07:57:22.132003   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:22.257031   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:22.390164   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:22.632297   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:22.756972   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:22.890598   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:23.131091   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:23.256940   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:23.390169   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:23.631622   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:23.756421   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:23.890309   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:24.132413   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:24.257713   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:24.389969   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:24.631306   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:24.757192   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:24.890856   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:25.131336   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:25.257155   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:25.389643   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:25.631328   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:25.757889   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:25.890752   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:26.131521   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:26.257653   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:26.390431   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:26.632170   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:26.757134   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:26.890359   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:27.131770   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:27.256248   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:27.390407   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:27.631529   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:27.755553   15847 kapi.go:107] duration metric: took 1m10.002039328s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 07:57:27.889489   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:28.130787   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:28.390632   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:28.630765   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:28.890504   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:29.131283   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:29.392382   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:29.633393   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:29.891572   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:30.132540   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:30.390657   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:30.645374   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:30.890778   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:31.130839   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:31.390423   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:31.631438   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:31.890837   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:32.132095   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:32.389560   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:32.631088   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:32.889765   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:33.131899   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:33.390038   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:33.631806   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:33.890762   15847 kapi.go:107] duration metric: took 1m16.503693864s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 07:57:34.130776   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:34.631595   15847 kapi.go:107] duration metric: took 1m10.503147577s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 07:57:34.633065   15847 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-959783 cluster.
	I1123 07:57:34.634065   15847 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 07:57:34.635111   15847 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 07:57:34.636211   15847 out.go:179] * Enabled addons: cloud-spanner, inspektor-gadget, registry-creds, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, default-storageclass, nvidia-device-plugin, ingress-dns, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1123 07:57:34.637196   15847 addons.go:530] duration metric: took 1m18.791824473s for enable addons: enabled=[cloud-spanner inspektor-gadget registry-creds amd-gpu-device-plugin storage-provisioner metrics-server yakd default-storageclass nvidia-device-plugin ingress-dns volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1123 07:57:34.637235   15847 start.go:247] waiting for cluster config update ...
	I1123 07:57:34.637263   15847 start.go:256] writing updated cluster config ...
	I1123 07:57:34.637488   15847 ssh_runner.go:195] Run: rm -f paused
	I1123 07:57:34.641150   15847 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 07:57:34.643441   15847 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bzmrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.646809   15847 pod_ready.go:94] pod "coredns-66bc5c9577-bzmrl" is "Ready"
	I1123 07:57:34.646826   15847 pod_ready.go:86] duration metric: took 3.366862ms for pod "coredns-66bc5c9577-bzmrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.648463   15847 pod_ready.go:83] waiting for pod "etcd-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.651414   15847 pod_ready.go:94] pod "etcd-addons-959783" is "Ready"
	I1123 07:57:34.651429   15847 pod_ready.go:86] duration metric: took 2.949725ms for pod "etcd-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.652985   15847 pod_ready.go:83] waiting for pod "kube-apiserver-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.655831   15847 pod_ready.go:94] pod "kube-apiserver-addons-959783" is "Ready"
	I1123 07:57:34.655847   15847 pod_ready.go:86] duration metric: took 2.847621ms for pod "kube-apiserver-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.657321   15847 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:35.044544   15847 pod_ready.go:94] pod "kube-controller-manager-addons-959783" is "Ready"
	I1123 07:57:35.044570   15847 pod_ready.go:86] duration metric: took 387.233108ms for pod "kube-controller-manager-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:35.244177   15847 pod_ready.go:83] waiting for pod "kube-proxy-lrdk2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:35.644753   15847 pod_ready.go:94] pod "kube-proxy-lrdk2" is "Ready"
	I1123 07:57:35.644776   15847 pod_ready.go:86] duration metric: took 400.575033ms for pod "kube-proxy-lrdk2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:35.844182   15847 pod_ready.go:83] waiting for pod "kube-scheduler-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:36.245131   15847 pod_ready.go:94] pod "kube-scheduler-addons-959783" is "Ready"
	I1123 07:57:36.245155   15847 pod_ready.go:86] duration metric: took 400.948181ms for pod "kube-scheduler-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:36.245167   15847 pod_ready.go:40] duration metric: took 1.603994726s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 07:57:36.295158   15847 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 07:57:36.297249   15847 out.go:179] * Done! kubectl is now configured to use "addons-959783" cluster and "default" namespace by default
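The `gcp-auth-skip-secret` opt-out mentioned in the output above only takes effect at pod creation, because the addon works as a mutating admission webhook that injects the credential mount while the pod is being admitted. A minimal sketch of opting a single pod out (the pod name skip-demo, the busybox image, and the sleep command are illustrative assumptions, not taken from this run):

	# Create a pod that carries the opt-out label from the start (names are illustrative)
	kubectl run skip-demo --image=gcr.io/k8s-minikube/busybox \
	  --labels=gcp-auth-skip-secret=true -- sleep 3600
	# If the label was honored, the injected credential volume should be absent from this list
	kubectl get pod skip-demo -o jsonpath='{.spec.volumes[*].name}'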
	
	
	==> CRI-O <==
	Nov 23 07:58:46 addons-959783 crio[773]: time="2025-11-23T07:58:46.80842628Z" level=info msg="Removing container: 523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91" id=3fbe3008-745b-4fd5-89d6-05548e10664e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 07:58:46 addons-959783 crio[773]: time="2025-11-23T07:58:46.816255752Z" level=info msg="Removed container 523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91: default/task-pv-pod-restore/task-pv-container" id=3fbe3008-745b-4fd5-89d6-05548e10664e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.259474109Z" level=info msg="Stopping pod sandbox: e5ffc0e53c6beeacb726469aa07cf8d50b4ee3b2c07cd19ed5537c19d645c122" id=ff03617c-ab8c-4429-b3b9-08f53ee5438e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.259522591Z" level=info msg="Stopped pod sandbox (already stopped): e5ffc0e53c6beeacb726469aa07cf8d50b4ee3b2c07cd19ed5537c19d645c122" id=ff03617c-ab8c-4429-b3b9-08f53ee5438e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.259788602Z" level=info msg="Removing pod sandbox: e5ffc0e53c6beeacb726469aa07cf8d50b4ee3b2c07cd19ed5537c19d645c122" id=ad264d8e-62cf-40dc-b37e-787495933ba1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.262845112Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.262896372Z" level=info msg="Removed pod sandbox: e5ffc0e53c6beeacb726469aa07cf8d50b4ee3b2c07cd19ed5537c19d645c122" id=ad264d8e-62cf-40dc-b37e-787495933ba1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.263173052Z" level=info msg="Stopping pod sandbox: 196fe9cb8ef7b005354455d169055ae9cdebdd1a2d8899af916a142dff09475f" id=8a12a3ae-74b4-4bdc-a6f6-e1b101f807a0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.263208089Z" level=info msg="Stopped pod sandbox (already stopped): 196fe9cb8ef7b005354455d169055ae9cdebdd1a2d8899af916a142dff09475f" id=8a12a3ae-74b4-4bdc-a6f6-e1b101f807a0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.263480108Z" level=info msg="Removing pod sandbox: 196fe9cb8ef7b005354455d169055ae9cdebdd1a2d8899af916a142dff09475f" id=aba613a8-a87c-4cb9-88c5-40567a17b3a2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.266590147Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 07:59:10 addons-959783 crio[773]: time="2025-11-23T07:59:10.266636476Z" level=info msg="Removed pod sandbox: 196fe9cb8ef7b005354455d169055ae9cdebdd1a2d8899af916a142dff09475f" id=aba613a8-a87c-4cb9-88c5-40567a17b3a2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.470651287Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-zr2pw/POD" id=f8b040dc-56af-4a53-b5e3-9f420d8ebcdd name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.47075369Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.477078253Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zr2pw Namespace:default ID:9e674c39372e6907c321ba205211580b9c4630c09f2c96b3615d4a9b151d7fbc UID:a7fc0217-1b2c-48c4-8537-8038558856d1 NetNS:/var/run/netns/5f74d89b-fab4-45dc-91f5-41b4f2dc5021 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000888e58}] Aliases:map[]}"
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.477106728Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-zr2pw to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.486521102Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-zr2pw Namespace:default ID:9e674c39372e6907c321ba205211580b9c4630c09f2c96b3615d4a9b151d7fbc UID:a7fc0217-1b2c-48c4-8537-8038558856d1 NetNS:/var/run/netns/5f74d89b-fab4-45dc-91f5-41b4f2dc5021 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000888e58}] Aliases:map[]}"
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.486683262Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-zr2pw for CNI network kindnet (type=ptp)"
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.487482268Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.488412161Z" level=info msg="Ran pod sandbox 9e674c39372e6907c321ba205211580b9c4630c09f2c96b3615d4a9b151d7fbc with infra container: default/hello-world-app-5d498dc89-zr2pw/POD" id=f8b040dc-56af-4a53-b5e3-9f420d8ebcdd name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.489756352Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=702029f9-5e3c-408b-aea1-d7101ea2d402 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.489880909Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=702029f9-5e3c-408b-aea1-d7101ea2d402 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.489917972Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=702029f9-5e3c-408b-aea1-d7101ea2d402 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.490506826Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=05d14490-1906-4ca6-a9b0-f5254ac40b6a name=/runtime.v1.ImageService/PullImage
	Nov 23 08:00:23 addons-959783 crio[773]: time="2025-11-23T08:00:23.494454733Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	ad1dfd5356782       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago       Running             registry-creds                           0                   5ec9b3dd3f57d       registry-creds-764b6fb674-5nncl            kube-system
	da677a70260d0       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago       Running             nginx                                    0                   dc64cba43ed83       nginx                                      default
	15eb9a2438d49       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago       Running             busybox                                  0                   b26ae315b68c4       busybox                                    default
	67e62a3782dbd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago       Running             gcp-auth                                 0                   7d78df10b5bd1       gcp-auth-78565c9fb4-5cjfg                  gcp-auth
	ea37a4f6d1d21       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago       Running             controller                               0                   566ae29283c7a       ingress-nginx-controller-6c8bf45fb-k5rdb   ingress-nginx
	4f7e9034a78fd       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             2 minutes ago       Exited              patch                                    2                   6e0a3fc9daa88       ingress-nginx-admission-patch-zf9fd        ingress-nginx
	444c2f1efdf59       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago       Running             csi-snapshotter                          0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                   kube-system
	87f89d7ecfbd2       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago       Running             csi-provisioner                          0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                   kube-system
	85507fd988591       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago       Running             liveness-probe                           0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                   kube-system
	de6c01c726f84       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                   kube-system
	e547e8711c1e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            3 minutes ago       Running             gadget                                   0                   ae5ee4fcabc57       gadget-jsjqv                               gadget
	d835cb4f7791d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                   kube-system
	8cc8cf367fdab       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago       Running             registry-proxy                           0                   7dc4e51da0ccd       registry-proxy-txmj8                       kube-system
	ef34710dda6e2       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ec1aab8a3918e       nvidia-device-plugin-daemonset-gft7l       kube-system
	3b156f686aa9b       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago       Running             amd-gpu-device-plugin                    0                   a8afc61ab0fdb       amd-gpu-device-plugin-kcdzf                kube-system
	40e801de9fbe0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                   kube-system
	fc199ed96e024       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   e35848eff9303       snapshot-controller-7d9fbc56b8-q26tv       kube-system
	ca4ffed1fb95f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago       Exited              create                                   0                   8acbc451cfc3a       ingress-nginx-admission-create-cjxlj       ingress-nginx
	01ca798c81384       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   5801501067e61       snapshot-controller-7d9fbc56b8-smbtq       kube-system
	1765f478745cd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   dd5990dbd8d58       csi-hostpath-resizer-0                     kube-system
	3ab2ba7b9cdf5       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago       Running             local-path-provisioner                   0                   ae84910c291a0       local-path-provisioner-648f6765c9-tznjx    local-path-storage
	9e8da838b979e       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago       Running             yakd                                     0                   5df602ae31fb1       yakd-dashboard-5ff678cb9-tx6dk             yakd-dashboard
	651380a78efa5       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   a89ce5b4c9ee0       csi-hostpath-attacher-0                    kube-system
	3a1961ad35159       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago       Running             registry                                 0                   fbb5cc422ecf5       registry-6b586f9694-mq8bw                  kube-system
	8fb1acf83526f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago       Running             minikube-ingress-dns                     0                   2ec6a6b0d0f7e       kube-ingress-dns-minikube                  kube-system
	c38e4541e0dba       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago       Running             cloud-spanner-emulator                   0                   b29330a812a18       cloud-spanner-emulator-5bdddb765-sfxnv     default
	dcae7b911caf9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago       Running             metrics-server                           0                   9fa02050ac2df       metrics-server-85b7d694d7-87jkk            kube-system
	3d968d545ec05       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago       Running             coredns                                  0                   0270667f50a9e       coredns-66bc5c9577-bzmrl                   kube-system
	8571ca641d958       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago       Running             storage-provisioner                      0                   973ebb9d087e5       storage-provisioner                        kube-system
	792f2602e690a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago       Running             kube-proxy                               0                   fe52c087cd5f6       kube-proxy-lrdk2                           kube-system
	adf924f9387e3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago       Running             kindnet-cni                              0                   62a55cded7e42       kindnet-vqst5                              kube-system
	6e081d40a1a88       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago       Running             kube-apiserver                           0                   4590791df805c       kube-apiserver-addons-959783               kube-system
	e05878f9bf96b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago       Running             kube-controller-manager                  0                   3957dc6faa309       kube-controller-manager-addons-959783      kube-system
	9e76f19262eee       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago       Running             kube-scheduler                           0                   06450e87ee6a2       kube-scheduler-addons-959783               kube-system
	f6524b0b95cff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago       Running             etcd                                     0                   3d1b46a0c25e6       etcd-addons-959783                         kube-system
	
	
	==> coredns [3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b] <==
	[INFO] 10.244.0.22:35950 - 59335 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133151s
	[INFO] 10.244.0.22:46525 - 43799 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006194202s
	[INFO] 10.244.0.22:35559 - 64489 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006330659s
	[INFO] 10.244.0.22:45875 - 11624 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006302281s
	[INFO] 10.244.0.22:40923 - 45890 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006529417s
	[INFO] 10.244.0.22:58136 - 10529 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004091138s
	[INFO] 10.244.0.22:39074 - 2465 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006063062s
	[INFO] 10.244.0.22:35645 - 24213 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001928691s
	[INFO] 10.244.0.22:36683 - 49159 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002400061s
	[INFO] 10.244.0.27:53472 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000243119s
	[INFO] 10.244.0.27:42339 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158377s
	[INFO] 10.244.0.29:46943 - 31057 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000209598s
	[INFO] 10.244.0.29:49187 - 43239 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000180676s
	[INFO] 10.244.0.29:45692 - 28277 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000107778s
	[INFO] 10.244.0.29:50333 - 34848 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00013623s
	[INFO] 10.244.0.29:57948 - 37341 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000112092s
	[INFO] 10.244.0.29:60263 - 12892 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00014823s
	[INFO] 10.244.0.29:53727 - 15151 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004379567s
	[INFO] 10.244.0.29:47898 - 52602 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004708707s
	[INFO] 10.244.0.29:36385 - 52478 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.003822099s
	[INFO] 10.244.0.29:41790 - 63189 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004338967s
	[INFO] 10.244.0.29:43750 - 22428 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004543793s
	[INFO] 10.244.0.29:54249 - 9259 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005032145s
	[INFO] 10.244.0.29:43745 - 42059 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001703333s
	[INFO] 10.244.0.29:36758 - 43227 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001792001s
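The NXDOMAIN entries above are ordinary search-path expansion rather than failures: with the cluster default of ndots:5, an external name like storage.googleapis.com is tried against every search suffix (cluster.local, the GCE internal domains, and so on) before the bare name finally resolves with NOERROR. A quick way to observe the same expansion from inside the cluster, assuming the busybox pod listed in the container status below and that its image ships the nslookup applet:

	# Show the search suffixes and the ndots:5 option coredns is expanding against
	kubectl exec busybox -- cat /etc/resolv.conf
	# Walks the same suffix list seen in the coredns log before answering
	kubectl exec busybox -- nslookup storage.googleapis.com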
	
	
	==> describe nodes <==
	Name:               addons-959783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-959783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=addons-959783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T07_56_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-959783
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-959783"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 07:56:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-959783
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:00:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:00:05 +0000   Sun, 23 Nov 2025 07:56:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:00:05 +0000   Sun, 23 Nov 2025 07:56:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:00:05 +0000   Sun, 23 Nov 2025 07:56:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:00:05 +0000   Sun, 23 Nov 2025 07:56:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-959783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                7975106a-c6f5-487f-a4ee-660505127c74
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     cloud-spanner-emulator-5bdddb765-sfxnv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  default                     hello-world-app-5d498dc89-zr2pw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-jsjqv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  gcp-auth                    gcp-auth-78565c9fb4-5cjfg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-k5rdb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m7s
	  kube-system                 amd-gpu-device-plugin-kcdzf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 coredns-66bc5c9577-bzmrl                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m8s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 csi-hostpathplugin-8skb7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 etcd-addons-959783                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-vqst5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m10s
	  kube-system                 kube-apiserver-addons-959783                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-addons-959783       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-lrdk2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-addons-959783                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 metrics-server-85b7d694d7-87jkk             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m7s
	  kube-system                 nvidia-device-plugin-daemonset-gft7l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 registry-6b586f9694-mq8bw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 registry-creds-764b6fb674-5nncl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 registry-proxy-txmj8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-q26tv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 snapshot-controller-7d9fbc56b8-smbtq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  local-path-storage          local-path-provisioner-648f6765c9-tznjx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-tx6dk              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  Starting                 4m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node addons-959783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node addons-959783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x8 over 4m19s)  kubelet          Node addons-959783 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node addons-959783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node addons-959783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node addons-959783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m10s                  node-controller  Node addons-959783 event: Registered Node addons-959783 in Controller
	  Normal  NodeReady                3m28s                  kubelet          Node addons-959783 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.079858] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024030] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.151122] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 07:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.034290] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +2.047767] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +4.031598] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +8.127154] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[ +16.382339] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[Nov23 07:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	
	
	==> etcd [f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea] <==
	{"level":"warn","ts":"2025-11-23T07:56:07.282195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.287581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.296434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.305204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.310605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.317793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.323489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.329914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.335580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.342967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.348835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.355089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.362075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.369216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.376269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.400809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.406371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.411982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.459248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:18.133526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:44.844784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:44.850457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:44.874148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56060","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T07:57:06.483825Z","caller":"traceutil/trace.go:172","msg":"trace[1385742835] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"120.000658ms","start":"2025-11-23T07:57:06.363806Z","end":"2025-11-23T07:57:06.483807Z","steps":["trace[1385742835] 'process raft request'  (duration: 119.841063ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T07:57:30.644401Z","caller":"traceutil/trace.go:172","msg":"trace[1414903627] transaction","detail":"{read_only:false; response_revision:1206; number_of_response:1; }","duration":"136.191247ms","start":"2025-11-23T07:57:30.508195Z","end":"2025-11-23T07:57:30.644386Z","steps":["trace[1414903627] 'process raft request'  (duration: 136.099241ms)"],"step_count":1}
	
	
	==> gcp-auth [67e62a3782dbdac8d36f038c0536bbe3746fe321e6bd7ea94fa66be1e8722d40] <==
	2025/11/23 07:57:33 GCP Auth Webhook started!
	2025/11/23 07:57:36 Ready to marshal response ...
	2025/11/23 07:57:36 Ready to write response ...
	2025/11/23 07:57:36 Ready to marshal response ...
	2025/11/23 07:57:36 Ready to write response ...
	2025/11/23 07:57:36 Ready to marshal response ...
	2025/11/23 07:57:36 Ready to write response ...
	2025/11/23 07:57:44 Ready to marshal response ...
	2025/11/23 07:57:44 Ready to write response ...
	2025/11/23 07:57:44 Ready to marshal response ...
	2025/11/23 07:57:44 Ready to write response ...
	2025/11/23 07:57:54 Ready to marshal response ...
	2025/11/23 07:57:54 Ready to write response ...
	2025/11/23 07:57:55 Ready to marshal response ...
	2025/11/23 07:57:55 Ready to write response ...
	2025/11/23 07:57:59 Ready to marshal response ...
	2025/11/23 07:57:59 Ready to write response ...
	2025/11/23 07:58:13 Ready to marshal response ...
	2025/11/23 07:58:13 Ready to write response ...
	2025/11/23 07:58:39 Ready to marshal response ...
	2025/11/23 07:58:39 Ready to write response ...
	2025/11/23 08:00:23 Ready to marshal response ...
	2025/11/23 08:00:23 Ready to write response ...
	
	
	==> kernel <==
	 08:00:24 up 42 min,  0 user,  load average: 0.36, 0.61, 0.31
	Linux addons-959783 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed] <==
	I1123 07:58:16.500857       1 main.go:301] handling current node
	I1123 07:58:26.420819       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:26.420866       1 main.go:301] handling current node
	I1123 07:58:36.420857       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:36.420901       1 main.go:301] handling current node
	I1123 07:58:46.420666       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:46.420717       1 main.go:301] handling current node
	I1123 07:58:56.425029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:58:56.425068       1 main.go:301] handling current node
	I1123 07:59:06.420196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:59:06.420223       1 main.go:301] handling current node
	I1123 07:59:16.420592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:59:16.420624       1 main.go:301] handling current node
	I1123 07:59:26.420862       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:59:26.420890       1 main.go:301] handling current node
	I1123 07:59:36.428973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:59:36.429003       1 main.go:301] handling current node
	I1123 07:59:46.429051       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:59:46.429078       1 main.go:301] handling current node
	I1123 07:59:56.420856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:59:56.420883       1 main.go:301] handling current node
	I1123 08:00:06.420348       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:00:06.420375       1 main.go:301] handling current node
	I1123 08:00:16.420345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:00:16.420370       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527] <==
	 > logger="UnhandledError"
	E1123 07:57:09.354744       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.157.143:443: connect: connection refused" logger="UnhandledError"
	E1123 07:57:09.356266       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.157.143:443: connect: connection refused" logger="UnhandledError"
	W1123 07:57:10.354926       1 handler_proxy.go:99] no RequestInfo found in the context
	W1123 07:57:10.354926       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 07:57:10.355002       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1123 07:57:10.355016       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1123 07:57:10.355015       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1123 07:57:10.356125       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1123 07:57:14.365384       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 07:57:14.365431       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 07:57:14.365498       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1123 07:57:14.372830       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 07:57:43.921263       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34954: use of closed network connection
	E1123 07:57:44.057487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34974: use of closed network connection
	I1123 07:57:59.528095       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1123 07:57:59.707277       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.137.129"}
	I1123 07:58:22.423754       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1123 08:00:23.232287       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.102.130"}
	
	
	==> kube-controller-manager [e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779] <==
	I1123 07:56:14.831855       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 07:56:14.831837       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 07:56:14.831894       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 07:56:14.832044       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 07:56:14.832045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 07:56:14.832046       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 07:56:14.832131       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 07:56:14.832334       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 07:56:14.832334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 07:56:14.832449       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 07:56:14.833218       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 07:56:14.833227       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 07:56:14.833329       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 07:56:14.833678       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 07:56:14.835881       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 07:56:14.836016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 07:56:14.850273       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1123 07:56:44.839841       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 07:56:44.839954       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 07:56:44.840003       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 07:56:44.859009       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 07:56:44.862152       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 07:56:44.940697       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 07:56:44.962876       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 07:56:59.787014       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687] <==
	I1123 07:56:15.965506       1 server_linux.go:53] "Using iptables proxy"
	I1123 07:56:16.163839       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 07:56:16.265050       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 07:56:16.268811       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 07:56:16.272027       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 07:56:16.521615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 07:56:16.523741       1 server_linux.go:132] "Using iptables Proxier"
	I1123 07:56:16.698348       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 07:56:16.716566       1 server.go:527] "Version info" version="v1.34.1"
	I1123 07:56:16.716908       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 07:56:16.718799       1 config.go:200] "Starting service config controller"
	I1123 07:56:16.718811       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 07:56:16.719216       1 config.go:106] "Starting endpoint slice config controller"
	I1123 07:56:16.719228       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 07:56:16.719244       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 07:56:16.719249       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 07:56:16.719469       1 config.go:309] "Starting node config controller"
	I1123 07:56:16.719477       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 07:56:16.719484       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 07:56:16.819564       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 07:56:16.824755       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 07:56:16.824808       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8] <==
	E1123 07:56:07.842375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 07:56:07.842421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 07:56:07.842487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 07:56:07.842513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 07:56:07.842562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 07:56:07.842552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 07:56:07.842633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 07:56:07.842673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 07:56:07.842704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 07:56:07.842721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 07:56:07.842735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 07:56:07.842774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 07:56:07.842796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 07:56:07.842803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 07:56:07.842852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 07:56:07.842886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 07:56:08.715648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 07:56:08.727450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 07:56:08.767422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 07:56:08.813247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 07:56:08.815064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 07:56:08.881431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 07:56:08.908345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 07:56:08.978788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 07:56:10.839632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.471407    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrqw\" (UniqueName: \"kubernetes.io/projected/42e2985b-8042-48f9-b0e1-2cdf03a786e0-kube-api-access-lfrqw\") pod \"42e2985b-8042-48f9-b0e1-2cdf03a786e0\" (UID: \"42e2985b-8042-48f9-b0e1-2cdf03a786e0\") "
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.471433    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/42e2985b-8042-48f9-b0e1-2cdf03a786e0-gcp-creds\") pod \"42e2985b-8042-48f9-b0e1-2cdf03a786e0\" (UID: \"42e2985b-8042-48f9-b0e1-2cdf03a786e0\") "
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.471606    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e2985b-8042-48f9-b0e1-2cdf03a786e0-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "42e2985b-8042-48f9-b0e1-2cdf03a786e0" (UID: "42e2985b-8042-48f9-b0e1-2cdf03a786e0"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.473922    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e2985b-8042-48f9-b0e1-2cdf03a786e0-kube-api-access-lfrqw" (OuterVolumeSpecName: "kube-api-access-lfrqw") pod "42e2985b-8042-48f9-b0e1-2cdf03a786e0" (UID: "42e2985b-8042-48f9-b0e1-2cdf03a786e0"). InnerVolumeSpecName "kube-api-access-lfrqw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.474352    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^3825eec5-c842-11f0-b10e-fe24522f00c0" (OuterVolumeSpecName: "task-pv-storage") pod "42e2985b-8042-48f9-b0e1-2cdf03a786e0" (UID: "42e2985b-8042-48f9-b0e1-2cdf03a786e0"). InnerVolumeSpecName "pvc-c8fb010e-e968-4981-84b1-a3bcc5c00994". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.572410    1301 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lfrqw\" (UniqueName: \"kubernetes.io/projected/42e2985b-8042-48f9-b0e1-2cdf03a786e0-kube-api-access-lfrqw\") on node \"addons-959783\" DevicePath \"\""
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.572438    1301 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/42e2985b-8042-48f9-b0e1-2cdf03a786e0-gcp-creds\") on node \"addons-959783\" DevicePath \"\""
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.572476    1301 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-c8fb010e-e968-4981-84b1-a3bcc5c00994\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3825eec5-c842-11f0-b10e-fe24522f00c0\") on node \"addons-959783\" "
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.577195    1301 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-c8fb010e-e968-4981-84b1-a3bcc5c00994" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^3825eec5-c842-11f0-b10e-fe24522f00c0") on node "addons-959783"
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.672915    1301 reconciler_common.go:299] "Volume detached for volume \"pvc-c8fb010e-e968-4981-84b1-a3bcc5c00994\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3825eec5-c842-11f0-b10e-fe24522f00c0\") on node \"addons-959783\" DevicePath \"\""
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.807145    1301 scope.go:117] "RemoveContainer" containerID="523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91"
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.816457    1301 scope.go:117] "RemoveContainer" containerID="523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91"
	Nov 23 07:58:46 addons-959783 kubelet[1301]: E1123 07:58:46.816782    1301 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91\": container with ID starting with 523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91 not found: ID does not exist" containerID="523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91"
	Nov 23 07:58:46 addons-959783 kubelet[1301]: I1123 07:58:46.816831    1301 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91"} err="failed to get container status \"523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91\": rpc error: code = NotFound desc = could not find container \"523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91\": container with ID starting with 523bf51ae64239f1588535e89cc3c9766cdeae20eec86ef673c821410fc06e91 not found: ID does not exist"
	Nov 23 07:58:48 addons-959783 kubelet[1301]: I1123 07:58:48.211096    1301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e2985b-8042-48f9-b0e1-2cdf03a786e0" path="/var/lib/kubelet/pods/42e2985b-8042-48f9-b0e1-2cdf03a786e0/volumes"
	Nov 23 07:58:50 addons-959783 kubelet[1301]: I1123 07:58:50.211657    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gft7l" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 07:58:55 addons-959783 kubelet[1301]: I1123 07:58:55.535603    1301 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0c9dd3a2-82d7-42cc-89d4-a16a3a7d0aa3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^28957364-c842-11f0-b10e-fe24522f00c0\") on node \"addons-959783\" "
	Nov 23 07:58:55 addons-959783 kubelet[1301]: E1123 07:58:55.539648    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^28957364-c842-11f0-b10e-fe24522f00c0 podName: nodeName:}" failed. No retries permitted until 2025-11-23 07:59:27.53962907 +0000 UTC m=+197.404096074 (durationBeforeRetry 32s). Error: UnmountDevice failed for volume "pvc-0c9dd3a2-82d7-42cc-89d4-a16a3a7d0aa3" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^28957364-c842-11f0-b10e-fe24522f00c0") on node "addons-959783" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 28957364-c842-11f0-b10e-fe24522f00c0 does not exist in the volumes list
	Nov 23 07:59:27 addons-959783 kubelet[1301]: I1123 07:59:27.555847    1301 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0c9dd3a2-82d7-42cc-89d4-a16a3a7d0aa3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^28957364-c842-11f0-b10e-fe24522f00c0\") on node \"addons-959783\" "
	Nov 23 07:59:27 addons-959783 kubelet[1301]: E1123 07:59:27.560495    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^28957364-c842-11f0-b10e-fe24522f00c0 podName: nodeName:}" failed. No retries permitted until 2025-11-23 08:00:31.560478956 +0000 UTC m=+261.424945960 (durationBeforeRetry 1m4s). Error: UnmountDevice failed for volume "pvc-0c9dd3a2-82d7-42cc-89d4-a16a3a7d0aa3" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^28957364-c842-11f0-b10e-fe24522f00c0") on node "addons-959783" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 28957364-c842-11f0-b10e-fe24522f00c0 does not exist in the volumes list
	Nov 23 07:59:51 addons-959783 kubelet[1301]: I1123 07:59:51.208653    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gft7l" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 07:59:58 addons-959783 kubelet[1301]: I1123 07:59:58.209075    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-txmj8" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:00:00 addons-959783 kubelet[1301]: I1123 08:00:00.209967    1301 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-kcdzf" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:00:23 addons-959783 kubelet[1301]: I1123 08:00:23.322402    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7d4v\" (UniqueName: \"kubernetes.io/projected/a7fc0217-1b2c-48c4-8537-8038558856d1-kube-api-access-g7d4v\") pod \"hello-world-app-5d498dc89-zr2pw\" (UID: \"a7fc0217-1b2c-48c4-8537-8038558856d1\") " pod="default/hello-world-app-5d498dc89-zr2pw"
	Nov 23 08:00:23 addons-959783 kubelet[1301]: I1123 08:00:23.322462    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a7fc0217-1b2c-48c4-8537-8038558856d1-gcp-creds\") pod \"hello-world-app-5d498dc89-zr2pw\" (UID: \"a7fc0217-1b2c-48c4-8537-8038558856d1\") " pod="default/hello-world-app-5d498dc89-zr2pw"
	
	
	==> storage-provisioner [8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8] <==
	W1123 07:59:59.964675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:01.966986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:01.970365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:03.972582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:03.976825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:05.979022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:05.983170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:07.985938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:07.989138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:09.992022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:09.996199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:11.998525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:12.001597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:14.004197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:14.007821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:16.010301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:16.014582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:18.017520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:18.020935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:20.023505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:20.026882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:22.029124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:22.032369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:24.034821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:00:24.038729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-959783 -n addons-959783
helpers_test.go:269: (dbg) Run:  kubectl --context addons-959783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-cjxlj ingress-nginx-admission-patch-zf9fd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-959783 describe pod ingress-nginx-admission-create-cjxlj ingress-nginx-admission-patch-zf9fd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-959783 describe pod ingress-nginx-admission-create-cjxlj ingress-nginx-admission-patch-zf9fd: exit status 1 (52.737308ms)
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cjxlj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zf9fd" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-959783 describe pod ingress-nginx-admission-create-cjxlj ingress-nginx-admission-patch-zf9fd: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (233.469398ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 08:00:25.497068   30410 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:00:25.497209   30410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:25.497219   30410 out.go:374] Setting ErrFile to fd 2...
	I1123 08:00:25.497223   30410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:25.497438   30410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:00:25.497654   30410 mustload.go:66] Loading cluster: addons-959783
	I1123 08:00:25.497961   30410 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:25.497975   30410 addons.go:622] checking whether the cluster is paused
	I1123 08:00:25.498055   30410 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:25.498066   30410 host.go:66] Checking if "addons-959783" exists ...
	I1123 08:00:25.498460   30410 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 08:00:25.515330   30410 ssh_runner.go:195] Run: systemctl --version
	I1123 08:00:25.515380   30410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 08:00:25.531349   30410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 08:00:25.629375   30410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:00:25.629447   30410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:00:25.656881   30410 cri.go:89] found id: "ad1dfd5356782ae1a3eab35c55a8babfe8788ac17891691075fe655d8b74199b"
	I1123 08:00:25.656901   30410 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 08:00:25.656907   30410 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 08:00:25.656912   30410 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 08:00:25.656916   30410 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 08:00:25.656921   30410 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 08:00:25.656925   30410 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 08:00:25.656930   30410 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 08:00:25.656934   30410 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 08:00:25.656948   30410 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 08:00:25.656956   30410 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 08:00:25.656961   30410 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 08:00:25.656969   30410 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 08:00:25.656975   30410 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 08:00:25.656979   30410 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 08:00:25.656990   30410 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 08:00:25.656995   30410 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 08:00:25.657001   30410 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 08:00:25.657005   30410 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 08:00:25.657011   30410 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 08:00:25.657022   30410 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 08:00:25.657027   30410 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 08:00:25.657032   30410 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 08:00:25.657042   30410 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 08:00:25.657046   30410 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 08:00:25.657051   30410 cri.go:89] found id: ""
	I1123 08:00:25.657120   30410 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:00:25.669866   30410 out.go:203] 
	W1123 08:00:25.670879   30410 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:00:25.670904   30410 out.go:285] * 
	* 
	W1123 08:00:25.673892   30410 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:00:25.675031   30410 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable ingress --alsologtostderr -v=1: exit status 11 (233.177686ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 08:00:25.731214   30473 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:00:25.731352   30473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:25.731361   30473 out.go:374] Setting ErrFile to fd 2...
	I1123 08:00:25.731364   30473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:00:25.731564   30473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:00:25.731880   30473 mustload.go:66] Loading cluster: addons-959783
	I1123 08:00:25.732324   30473 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:25.732343   30473 addons.go:622] checking whether the cluster is paused
	I1123 08:00:25.732468   30473 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:00:25.732484   30473 host.go:66] Checking if "addons-959783" exists ...
	I1123 08:00:25.732970   30473 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 08:00:25.749841   30473 ssh_runner.go:195] Run: systemctl --version
	I1123 08:00:25.749882   30473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 08:00:25.766127   30473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 08:00:25.864568   30473 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:00:25.864664   30473 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:00:25.891016   30473 cri.go:89] found id: "ad1dfd5356782ae1a3eab35c55a8babfe8788ac17891691075fe655d8b74199b"
	I1123 08:00:25.891034   30473 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 08:00:25.891039   30473 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 08:00:25.891042   30473 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 08:00:25.891045   30473 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 08:00:25.891049   30473 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 08:00:25.891052   30473 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 08:00:25.891054   30473 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 08:00:25.891058   30473 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 08:00:25.891072   30473 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 08:00:25.891077   30473 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 08:00:25.891082   30473 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 08:00:25.891086   30473 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 08:00:25.891091   30473 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 08:00:25.891096   30473 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 08:00:25.891103   30473 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 08:00:25.891112   30473 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 08:00:25.891118   30473 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 08:00:25.891122   30473 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 08:00:25.891127   30473 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 08:00:25.891132   30473 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 08:00:25.891135   30473 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 08:00:25.891138   30473 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 08:00:25.891140   30473 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 08:00:25.891155   30473 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 08:00:25.891164   30473 cri.go:89] found id: ""
	I1123 08:00:25.891205   30473 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:00:25.903929   30473 out.go:203] 
	W1123 08:00:25.904908   30473 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:00:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:00:25.904921   30473 out.go:285] * 
	* 
	W1123 08:00:25.907870   30473 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:00:25.908873   30473 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.63s)
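Every `addons disable` call above exits with MK_ADDON_DISABLE_PAUSED before it ever touches the addon: minikube's paused-state probe shells into the node and runs `sudo runc list -f json`, which fails with `open /run/runc: no such file or directory`. A minimal diagnostic sketch for this failure mode, assuming only the standard minikube and crictl CLIs; the paths and the `runtime_root` key below are assumptions based on common CRI-O defaults, not something shown in this report:

	# open a shell on the node (profile name taken from this report)
	out/minikube-linux-amd64 -p addons-959783 ssh
	# inside the node: the paused-state probe expects runc state under /run/runc
	ls -ld /run/runc /run/crun           # assumption: which directory exists depends on the configured runtime
	grep -rn runtime_root /etc/crio/     # assumption: CRI-O's runtime_root names the actual state directory
	sudo crictl ps -a | head             # containers stay visible through the CRI either way

If CRI-O drives its containers through a runtime whose state directory is not /run/runc, the `runc list` probe fails even on an unpaused cluster, which would match the repeated exit status 11 seen throughout this run.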
TestAddons/parallel/InspektorGadget (6.25s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jsjqv" [166fd2cd-0edd-4bb6-ba49-54191b5e70df] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002341777s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (245.239041ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 07:57:56.614900   24898 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:56.615193   24898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:56.615200   24898 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:56.615205   24898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:56.616157   24898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:56.616587   24898 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:56.617615   24898 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:56.617641   24898 addons.go:622] checking whether the cluster is paused
	I1123 07:57:56.617803   24898 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:56.617823   24898 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:56.618366   24898 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:56.639555   24898 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:56.639622   24898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:56.658036   24898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:56.755450   24898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:56.755528   24898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:56.783943   24898 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:56.783964   24898 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:56.783970   24898 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:56.783976   24898 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:56.783981   24898 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:56.783986   24898 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:56.783990   24898 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:56.783997   24898 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:56.784002   24898 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:56.784010   24898 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:56.784019   24898 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:56.784024   24898 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:56.784033   24898 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:56.784038   24898 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:56.784042   24898 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:56.784053   24898 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:56.784061   24898 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:56.784068   24898 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:56.784072   24898 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:56.784077   24898 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:56.784082   24898 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:56.784087   24898 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:56.784091   24898 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:56.784096   24898 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:56.784100   24898 cri.go:89] found id: ""
	I1123 07:57:56.784134   24898 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:56.797937   24898 out.go:203] 
	W1123 07:57:56.798818   24898 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:56.798834   24898 out.go:285] * 
	* 
	W1123 07:57:56.801671   24898 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:56.802793   24898 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.25s)
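The gadget pods themselves went healthy within 6 seconds; the failure is the same paused-state probe described above. A quick check that the cluster is running rather than paused, sketched with standard minikube and crictl commands (profile name taken from this report):

	out/minikube-linux-amd64 -p addons-959783 status
	out/minikube-linux-amd64 -p addons-959783 ssh -- sudo crictl ps --state running | head

Both would be expected to report a running control plane here, pointing at the `runc list` probe rather than at actual cluster state.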
TestAddons/parallel/MetricsServer (5.32s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.237635ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-87jkk" [71097df3-1b14-4559-b01c-7084f8d00b53] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002901838s
addons_test.go:463: (dbg) Run:  kubectl --context addons-959783 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (259.012064ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 07:57:59.473361   26022 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:59.473752   26022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:59.473766   26022 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:59.473772   26022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:59.474026   26022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:59.474291   26022 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:59.474587   26022 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:59.474601   26022 addons.go:622] checking whether the cluster is paused
	I1123 07:57:59.474677   26022 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:59.474708   26022 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:59.475040   26022 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:59.494311   26022 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:59.494368   26022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:59.512854   26022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:59.618627   26022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:59.618819   26022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:59.651083   26022 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:59.651099   26022 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:59.651104   26022 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:59.651107   26022 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:59.651110   26022 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:59.651113   26022 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:59.651116   26022 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:59.651119   26022 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:59.651128   26022 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:59.651138   26022 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:59.651140   26022 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:59.651143   26022 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:59.651146   26022 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:59.651149   26022 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:59.651152   26022 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:59.651156   26022 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:59.651158   26022 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:59.651162   26022 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:59.651165   26022 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:59.651167   26022 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:59.651170   26022 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:59.651172   26022 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:59.651180   26022 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:59.651183   26022 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:59.651185   26022 cri.go:89] found id: ""
	I1123 07:57:59.651224   26022 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:59.664511   26022 out.go:203] 
	W1123 07:57:59.665630   26022 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:59.665652   26022 out.go:285] * 
	* 
	W1123 07:57:59.669990   26022 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:59.670989   26022 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)
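
addons_test.go:457 above waits up to 6m0s for a pod labelled `k8s-app=metrics-server` to become healthy before the disable is attempted. A rough client-go equivalent of that wait, offered as a sketch only (it assumes a recent client-go/apimachinery that provides `wait.PollUntilContextTimeout`, and a kubeconfig whose current context points at this cluster), not the actual helpers_test.go implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll until some pod labelled k8s-app=metrics-server reports Running,
        // with the same 6m0s budget the test uses.
        err = wait.PollUntilContextTimeout(context.Background(),
            2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
                    metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
                if err != nil {
                    return false, err
                }
                for _, p := range pods.Items {
                    if p.Status.Phase == "Running" {
                        fmt.Println("healthy:", p.Name)
                        return true, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
    }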

TestAddons/parallel/CSI (49.54s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1123 07:57:58.072064   14488 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 07:57:58.075098   14488 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 07:57:58.075118   14488 kapi.go:107] duration metric: took 3.078083ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.087592ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-959783 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-959783 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [ee5c1a26-f32b-4fbf-890b-eb812cc7716b] Pending
helpers_test.go:352: "task-pv-pod" [ee5c1a26-f32b-4fbf-890b-eb812cc7716b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [ee5c1a26-f32b-4fbf-890b-eb812cc7716b] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003550462s
addons_test.go:572: (dbg) Run:  kubectl --context addons-959783 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-959783 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-959783 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-959783 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-959783 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-959783 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-959783 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [42e2985b-8042-48f9-b0e1-2cdf03a786e0] Pending
helpers_test.go:352: "task-pv-pod-restore" [42e2985b-8042-48f9-b0e1-2cdf03a786e0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [42e2985b-8042-48f9-b0e1-2cdf03a786e0] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003676254s
addons_test.go:614: (dbg) Run:  kubectl --context addons-959783 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-959783 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-959783 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (235.361727ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:58:47.191492   28350 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:58:47.191776   28350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:58:47.191785   28350 out.go:374] Setting ErrFile to fd 2...
	I1123 07:58:47.191790   28350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:58:47.191979   28350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:58:47.192227   28350 mustload.go:66] Loading cluster: addons-959783
	I1123 07:58:47.192509   28350 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:58:47.192523   28350 addons.go:622] checking whether the cluster is paused
	I1123 07:58:47.192599   28350 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:58:47.192610   28350 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:58:47.192997   28350 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:58:47.210803   28350 ssh_runner.go:195] Run: systemctl --version
	I1123 07:58:47.210843   28350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:58:47.227044   28350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:58:47.324428   28350 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:58:47.324498   28350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:58:47.351895   28350 cri.go:89] found id: "ad1dfd5356782ae1a3eab35c55a8babfe8788ac17891691075fe655d8b74199b"
	I1123 07:58:47.351912   28350 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:58:47.351918   28350 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:58:47.351923   28350 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:58:47.351927   28350 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:58:47.351931   28350 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:58:47.351936   28350 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:58:47.351940   28350 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:58:47.351945   28350 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:58:47.351952   28350 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:58:47.351957   28350 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:58:47.351962   28350 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:58:47.351968   28350 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:58:47.351978   28350 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:58:47.351984   28350 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:58:47.352006   28350 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:58:47.352014   28350 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:58:47.352021   28350 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:58:47.352025   28350 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:58:47.352028   28350 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:58:47.352033   28350 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:58:47.352037   28350 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:58:47.352040   28350 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:58:47.352045   28350 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:58:47.352049   28350 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:58:47.352053   28350 cri.go:89] found id: ""
	I1123 07:58:47.352094   28350 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:58:47.364879   28350 out.go:203] 
	W1123 07:58:47.365952   28350 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:58:47.365969   28350 out.go:285] * 
	* 
	W1123 07:58:47.368933   28350 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:58:47.370002   28350 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (236.116419ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:58:47.426290   28428 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:58:47.426562   28428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:58:47.426572   28428 out.go:374] Setting ErrFile to fd 2...
	I1123 07:58:47.426576   28428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:58:47.426764   28428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:58:47.426997   28428 mustload.go:66] Loading cluster: addons-959783
	I1123 07:58:47.427334   28428 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:58:47.427348   28428 addons.go:622] checking whether the cluster is paused
	I1123 07:58:47.427426   28428 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:58:47.427437   28428 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:58:47.427771   28428 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:58:47.445673   28428 ssh_runner.go:195] Run: systemctl --version
	I1123 07:58:47.445742   28428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:58:47.462283   28428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:58:47.560623   28428 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:58:47.560701   28428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:58:47.587938   28428 cri.go:89] found id: "ad1dfd5356782ae1a3eab35c55a8babfe8788ac17891691075fe655d8b74199b"
	I1123 07:58:47.587954   28428 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:58:47.587958   28428 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:58:47.587962   28428 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:58:47.587965   28428 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:58:47.587968   28428 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:58:47.587971   28428 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:58:47.587973   28428 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:58:47.587976   28428 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:58:47.587981   28428 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:58:47.587985   28428 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:58:47.587989   28428 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:58:47.587993   28428 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:58:47.587996   28428 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:58:47.587999   28428 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:58:47.588011   28428 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:58:47.588017   28428 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:58:47.588022   28428 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:58:47.588025   28428 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:58:47.588027   28428 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:58:47.588030   28428 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:58:47.588033   28428 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:58:47.588035   28428 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:58:47.588038   28428 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:58:47.588041   28428 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:58:47.588043   28428 cri.go:89] found id: ""
	I1123 07:58:47.588080   28428 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:58:47.601401   28428 out.go:203] 
	W1123 07:58:47.602636   28428 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:58:47.602650   28428 out.go:285] * 
	* 
	W1123 07:58:47.605570   28428 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:58:47.606707   28428 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (49.54s)
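
The long runs of helpers_test.go:402 above are a poll loop: the harness re-runs `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports Bound. A minimal sketch of that loop, shelling out to kubectl with the exact arguments from the log; the 6m0s budget matches the test's wait, while the 2-second poll interval is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // pvcPhase runs the same kubectl command the helpers_test.go:402 lines show.
    func pvcPhase(kubeContext, name string) (string, error) {
        out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc",
            name, "-o", "jsonpath={.status.phase}", "-n", "default").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // the test's 6m0s budget
        for time.Now().Before(deadline) {
            phase, err := pvcPhase("addons-959783", "hpvc")
            if err == nil && phase == "Bound" {
                fmt.Println("pvc hpvc is Bound")
                return
            }
            time.Sleep(2 * time.Second) // interval is an assumption, not from the log
        }
        fmt.Println("timed out waiting for pvc hpvc")
    }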

TestAddons/parallel/Headlamp (2.47s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-959783 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-959783 --alsologtostderr -v=1: exit status 11 (237.901062ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:57:56.862205   25026 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:56.862365   25026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:56.862376   25026 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:56.862383   25026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:56.862578   25026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:56.862843   25026 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:56.863185   25026 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:56.863201   25026 addons.go:622] checking whether the cluster is paused
	I1123 07:57:56.863315   25026 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:56.863332   25026 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:56.863763   25026 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:56.881984   25026 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:56.882045   25026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:56.898683   25026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:56.995442   25026 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:56.995528   25026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:57.022647   25026 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:57.022663   25026 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:57.022667   25026 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:57.022670   25026 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:57.022673   25026 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:57.022676   25026 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:57.022679   25026 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:57.022682   25026 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:57.022699   25026 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:57.022720   25026 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:57.022728   25026 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:57.022733   25026 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:57.022740   25026 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:57.022745   25026 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:57.022749   25026 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:57.022759   25026 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:57.022764   25026 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:57.022768   25026 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:57.022771   25026 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:57.022773   25026 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:57.022778   25026 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:57.022783   25026 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:57.022785   25026 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:57.022788   25026 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:57.022791   25026 cri.go:89] found id: ""
	I1123 07:57:57.022820   25026 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:57.035626   25026 out.go:203] 
	W1123 07:57:57.036522   25026 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:57.036534   25026 out.go:285] * 
	* 
	W1123 07:57:57.039555   25026 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:57.040539   25026 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-959783 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-959783
helpers_test.go:243: (dbg) docker inspect addons-959783:

-- stdout --
	[
	    {
	        "Id": "854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd",
	        "Created": "2025-11-23T07:55:55.928302435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T07:55:55.95678841Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd/854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd-json.log",
	        "Name": "/addons-959783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-959783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-959783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "854fc0b8c9862d9004f4b84da1861f89b5c89171cc91e3f04758077cb33a3cbd",
	                "LowerDir": "/var/lib/docker/overlay2/5a929a949d7a3fbf6a37cf0146c1192103ed2bee1529b031c4ed6f5ed4ac4c2d-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5a929a949d7a3fbf6a37cf0146c1192103ed2bee1529b031c4ed6f5ed4ac4c2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5a929a949d7a3fbf6a37cf0146c1192103ed2bee1529b031c4ed6f5ed4ac4c2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5a929a949d7a3fbf6a37cf0146c1192103ed2bee1529b031c4ed6f5ed4ac4c2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-959783",
	                "Source": "/var/lib/docker/volumes/addons-959783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-959783",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-959783",
	                "name.minikube.sigs.k8s.io": "addons-959783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d1069886fc94b823ca0e096eedae1bee7cf5427fc3f81535bf07d028296eb04a",
	            "SandboxKey": "/var/run/docker/netns/d1069886fc94",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-959783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a306ca547acc6c4434ea5deb2d8206350f1225a903e9f5ad0eda5ddcee5b3c23",
	                    "EndpointID": "bb7c8d51d293d902f617a6ac06a03bf1b2219034e7397fc6de5518a168ebd667",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "32:21:cd:96:00:65",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-959783",
	                        "854fc0b8c986"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
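
This inspect output is the source of the port numbers in the `cli_runner` lines throughout the report: minikube resolves the node's SSH endpoint from the `22/tcp` binding (here 127.0.0.1:32768) before opening its SSH session. A small sketch of that lookup using the same Go template the log shows (minus minikube's stray wrapping quotes), with the container name again taken from this report:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The template the cli_runner lines pass to `docker container inspect -f`;
        // on this node it resolves 22/tcp to host port 32768.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect",
            "-f", format, "addons-959783").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
    }

Incidentally, the `"Tmpfs": {"/run": ""}` entry in HostConfig above means /run inside the node is a fresh tmpfs; that is worth keeping in mind when investigating the missing `/run/runc` directory, though this report alone does not establish it as the cause.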
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-959783 -n addons-959783
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-959783 logs -n 25: (1.111953636s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-212537 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-212537   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ delete  │ -p download-only-212537                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-212537   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ start   │ -o=json --download-only -p download-only-071935 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-071935   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ delete  │ -p download-only-071935                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-071935   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ delete  │ -p download-only-212537                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-212537   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ delete  │ -p download-only-071935                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-071935   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ start   │ --download-only -p download-docker-372793 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-372793 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ -p download-docker-372793                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-372793 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ start   │ --download-only -p binary-mirror-218443 --alsologtostderr --binary-mirror http://127.0.0.1:42195 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-218443   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ -p binary-mirror-218443                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-218443   │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ addons  │ enable dashboard -p addons-959783                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-959783                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ start   │ -p addons-959783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:57 UTC │
	│ addons  │ addons-959783 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ ssh     │ addons-959783 ssh cat /opt/local-path-provisioner/pvc-eb41d53f-743e-4287-8190-205dfc85238e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │ 23 Nov 25 07:57 UTC │
	│ addons  │ addons-959783 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ addons-959783 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-959783 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-959783          │ jenkins │ v1.37.0 │ 23 Nov 25 07:57 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:33.156710   15847 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:33.156793   15847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:33.156804   15847 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:33.156811   15847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:33.156986   15847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:55:33.157523   15847 out.go:368] Setting JSON to false
	I1123 07:55:33.158355   15847 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2280,"bootTime":1763882253,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 07:55:33.158407   15847 start.go:143] virtualization: kvm guest
	I1123 07:55:33.160000   15847 out.go:179] * [addons-959783] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 07:55:33.161382   15847 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 07:55:33.161385   15847 notify.go:221] Checking for updates...
	I1123 07:55:33.163498   15847 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:33.164549   15847 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 07:55:33.165464   15847 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 07:55:33.166443   15847 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 07:55:33.167386   15847 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 07:55:33.168631   15847 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:33.191423   15847 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 07:55:33.191503   15847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:33.245955   15847 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 07:55:33.236560259 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:33.246054   15847 docker.go:319] overlay module found
	I1123 07:55:33.247722   15847 out.go:179] * Using the docker driver based on user configuration
	I1123 07:55:33.248764   15847 start.go:309] selected driver: docker
	I1123 07:55:33.248775   15847 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:33.248789   15847 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 07:55:33.249302   15847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:33.299624   15847 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-23 07:55:33.291203917 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:33.299801   15847 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:33.300022   15847 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 07:55:33.301577   15847 out.go:179] * Using Docker driver with root privileges
	I1123 07:55:33.302791   15847 cni.go:84] Creating CNI manager for ""
	I1123 07:55:33.302872   15847 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:55:33.302889   15847 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 07:55:33.302975   15847 start.go:353] cluster config:
	{Name:addons-959783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:55:33.304255   15847 out.go:179] * Starting "addons-959783" primary control-plane node in "addons-959783" cluster
	I1123 07:55:33.305222   15847 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 07:55:33.306287   15847 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 07:55:33.307446   15847 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:55:33.307469   15847 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 07:55:33.307476   15847 cache.go:65] Caching tarball of preloaded images
	I1123 07:55:33.307515   15847 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 07:55:33.307567   15847 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 07:55:33.307583   15847 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 07:55:33.307971   15847 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/config.json ...
	I1123 07:55:33.307996   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/config.json: {Name:mk2fb98b4f63c3df0dc6c7df814c098f300b1dfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:33.322541   15847 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 07:55:33.322645   15847 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 07:55:33.322660   15847 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 07:55:33.322664   15847 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 07:55:33.322674   15847 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 07:55:33.322678   15847 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 07:55:45.535843   15847 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 07:55:45.535878   15847 cache.go:243] Successfully downloaded all kic artifacts
	I1123 07:55:45.535928   15847 start.go:360] acquireMachinesLock for addons-959783: {Name:mkf4aef4d0f867e43fc9f52726964683306a64ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 07:55:45.536036   15847 start.go:364] duration metric: took 85.826µs to acquireMachinesLock for "addons-959783"
	I1123 07:55:45.536065   15847 start.go:93] Provisioning new machine with config: &{Name:addons-959783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 07:55:45.536150   15847 start.go:125] createHost starting for "" (driver="docker")
	I1123 07:55:45.537734   15847 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 07:55:45.537940   15847 start.go:159] libmachine.API.Create for "addons-959783" (driver="docker")
	I1123 07:55:45.537976   15847 client.go:173] LocalClient.Create starting
	I1123 07:55:45.538076   15847 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 07:55:45.586531   15847 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 07:55:45.646616   15847 cli_runner.go:164] Run: docker network inspect addons-959783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 07:55:45.663032   15847 cli_runner.go:211] docker network inspect addons-959783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 07:55:45.663097   15847 network_create.go:284] running [docker network inspect addons-959783] to gather additional debugging logs...
	I1123 07:55:45.663112   15847 cli_runner.go:164] Run: docker network inspect addons-959783
	W1123 07:55:45.678234   15847 cli_runner.go:211] docker network inspect addons-959783 returned with exit code 1
	I1123 07:55:45.678256   15847 network_create.go:287] error running [docker network inspect addons-959783]: docker network inspect addons-959783: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-959783 not found
	I1123 07:55:45.678266   15847 network_create.go:289] output of [docker network inspect addons-959783]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-959783 not found
	
	** /stderr **
	I1123 07:55:45.678356   15847 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 07:55:45.693283   15847 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cc5bd0}
	I1123 07:55:45.693320   15847 network_create.go:124] attempt to create docker network addons-959783 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 07:55:45.693375   15847 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-959783 addons-959783
	I1123 07:55:45.734356   15847 network_create.go:108] docker network addons-959783 192.168.49.0/24 created
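	Note: minikube scanned for a free private subnet and settled on 192.168.49.0/24 with gateway 192.168.49.1 before issuing the "docker network create" above. A quick way to confirm the resulting network from the host (a sketch; the profile name is the one used throughout this run):
	    docker network inspect addons-959783 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	    # expected: 192.168.49.0/24 192.168.49.1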
	I1123 07:55:45.734378   15847 kic.go:121] calculated static IP "192.168.49.2" for the "addons-959783" container
	I1123 07:55:45.734448   15847 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 07:55:45.749039   15847 cli_runner.go:164] Run: docker volume create addons-959783 --label name.minikube.sigs.k8s.io=addons-959783 --label created_by.minikube.sigs.k8s.io=true
	I1123 07:55:45.765125   15847 oci.go:103] Successfully created a docker volume addons-959783
	I1123 07:55:45.765195   15847 cli_runner.go:164] Run: docker run --rm --name addons-959783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-959783 --entrypoint /usr/bin/test -v addons-959783:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 07:55:51.574936   15847 cli_runner.go:217] Completed: docker run --rm --name addons-959783-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-959783 --entrypoint /usr/bin/test -v addons-959783:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (5.80968291s)
	I1123 07:55:51.574961   15847 oci.go:107] Successfully prepared a docker volume addons-959783
	I1123 07:55:51.575021   15847 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:55:51.575032   15847 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 07:55:51.575079   15847 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-959783:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 07:55:55.859348   15847 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-959783:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.284238399s)
	I1123 07:55:55.859377   15847 kic.go:203] duration metric: took 4.284340892s to extract preloaded images to volume ...
	W1123 07:55:55.859478   15847 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 07:55:55.859510   15847 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 07:55:55.859558   15847 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 07:55:55.913230   15847 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-959783 --name addons-959783 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-959783 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-959783 --network addons-959783 --ip 192.168.49.2 --volume addons-959783:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 07:55:56.208190   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Running}}
	I1123 07:55:56.226653   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:55:56.243915   15847 cli_runner.go:164] Run: docker exec addons-959783 stat /var/lib/dpkg/alternatives/iptables
	I1123 07:55:56.286955   15847 oci.go:144] the created container "addons-959783" has a running status.
	I1123 07:55:56.286985   15847 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa...
	I1123 07:55:56.428216   15847 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 07:55:56.455347   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:55:56.475310   15847 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 07:55:56.475334   15847 kic_runner.go:114] Args: [docker exec --privileged addons-959783 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 07:55:56.527269   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:55:56.548370   15847 machine.go:94] provisionDockerMachine start ...
	I1123 07:55:56.548453   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:56.567568   15847 main.go:143] libmachine: Using SSH client type: native
	I1123 07:55:56.567932   15847 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 07:55:56.567952   15847 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 07:55:56.710791   15847 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-959783
	
	I1123 07:55:56.710822   15847 ubuntu.go:182] provisioning hostname "addons-959783"
	I1123 07:55:56.710877   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:56.729728   15847 main.go:143] libmachine: Using SSH client type: native
	I1123 07:55:56.730053   15847 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 07:55:56.730077   15847 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-959783 && echo "addons-959783" | sudo tee /etc/hostname
	I1123 07:55:56.878596   15847 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-959783
	
	I1123 07:55:56.878663   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:56.897038   15847 main.go:143] libmachine: Using SSH client type: native
	I1123 07:55:56.897304   15847 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 07:55:56.897332   15847 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-959783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-959783/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-959783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 07:55:57.034156   15847 main.go:143] libmachine: SSH cmd err, output: <nil>: 
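	The SSH script above makes the node resolve its own hostname locally: it rewrites the 127.0.1.1 line in /etc/hosts if one exists, and appends one otherwise. A minimal check from the host (a sketch, assuming the container name used in this run):
	    docker exec addons-959783 grep 127.0.1.1 /etc/hosts
	    # expected: 127.0.1.1 addons-959783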
	I1123 07:55:57.034182   15847 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 07:55:57.034218   15847 ubuntu.go:190] setting up certificates
	I1123 07:55:57.034234   15847 provision.go:84] configureAuth start
	I1123 07:55:57.034282   15847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-959783
	I1123 07:55:57.050842   15847 provision.go:143] copyHostCerts
	I1123 07:55:57.050901   15847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 07:55:57.051014   15847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 07:55:57.051086   15847 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 07:55:57.051151   15847 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.addons-959783 san=[127.0.0.1 192.168.49.2 addons-959783 localhost minikube]
	I1123 07:55:57.189965   15847 provision.go:177] copyRemoteCerts
	I1123 07:55:57.190010   15847 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 07:55:57.190039   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.205585   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:57.303652   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 07:55:57.320794   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 07:55:57.336155   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 07:55:57.351411   15847 provision.go:87] duration metric: took 317.163506ms to configureAuth
	I1123 07:55:57.351437   15847 ubuntu.go:206] setting minikube options for container-runtime
	I1123 07:55:57.351585   15847 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:55:57.351675   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.368431   15847 main.go:143] libmachine: Using SSH client type: native
	I1123 07:55:57.368628   15847 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 07:55:57.368646   15847 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 07:55:57.634346   15847 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
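	The sysconfig drop-in written above tells CRI-O to treat the cluster's service CIDR (10.96.0.0/12) as an insecure registry, so in-cluster registries reachable through a ClusterIP can be pulled from without TLS. To confirm the file landed (a sketch):
	    docker exec addons-959783 cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '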
	I1123 07:55:57.634375   15847 machine.go:97] duration metric: took 1.085984545s to provisionDockerMachine
	I1123 07:55:57.634390   15847 client.go:176] duration metric: took 12.096402844s to LocalClient.Create
	I1123 07:55:57.634415   15847 start.go:167] duration metric: took 12.09647401s to libmachine.API.Create "addons-959783"
	I1123 07:55:57.634427   15847 start.go:293] postStartSetup for "addons-959783" (driver="docker")
	I1123 07:55:57.634441   15847 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 07:55:57.634512   15847 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 07:55:57.634562   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.651185   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:57.750204   15847 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 07:55:57.753127   15847 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 07:55:57.753148   15847 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 07:55:57.753158   15847 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 07:55:57.753208   15847 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 07:55:57.753231   15847 start.go:296] duration metric: took 118.797117ms for postStartSetup
	I1123 07:55:57.753482   15847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-959783
	I1123 07:55:57.770391   15847 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/config.json ...
	I1123 07:55:57.770645   15847 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 07:55:57.770713   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.786281   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:57.880923   15847 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 07:55:57.884924   15847 start.go:128] duration metric: took 12.348761677s to createHost
	I1123 07:55:57.884946   15847 start.go:83] releasing machines lock for "addons-959783", held for 12.348894314s
	I1123 07:55:57.884999   15847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-959783
	I1123 07:55:57.900652   15847 ssh_runner.go:195] Run: cat /version.json
	I1123 07:55:57.900712   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.900741   15847 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 07:55:57.900815   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:55:57.917123   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:57.917896   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:55:58.063638   15847 ssh_runner.go:195] Run: systemctl --version
	I1123 07:55:58.069307   15847 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 07:55:58.099821   15847 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 07:55:58.103846   15847 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 07:55:58.103888   15847 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 07:55:58.126345   15847 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
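	Disabling the stock bridge configs matters because CNI-aware runtimes load the lexically first config file in /etc/cni/net.d; renaming them to *.mk_disabled leaves the directory clear for the kindnet config minikube installs later. Based on the two paths reported above, the directory should now contain roughly (a sketch):
	    docker exec addons-959783 ls /etc/cni/net.d
	    # 10-crio-bridge.conflist.disabled.mk_disabled
	    # 87-podman-bridge.conflist.mk_disabled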
	I1123 07:55:58.126362   15847 start.go:496] detecting cgroup driver to use...
	I1123 07:55:58.126392   15847 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 07:55:58.126432   15847 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 07:55:58.140333   15847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 07:55:58.150824   15847 docker.go:218] disabling cri-docker service (if available) ...
	I1123 07:55:58.150865   15847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 07:55:58.165184   15847 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 07:55:58.180098   15847 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 07:55:58.259944   15847 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 07:55:58.340461   15847 docker.go:234] disabling docker service ...
	I1123 07:55:58.340513   15847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 07:55:58.356115   15847 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 07:55:58.367059   15847 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 07:55:58.443671   15847 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 07:55:58.518994   15847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 07:55:58.529646   15847 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 07:55:58.542104   15847 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 07:55:58.542154   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.551010   15847 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 07:55:58.551051   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.558715   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.566137   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.573573   15847 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 07:55:58.580451   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.587778   15847 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 07:55:58.599655   15847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
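	Taken together, the sed edits above pin the pause image, switch CRI-O's cgroup manager to systemd (matching the "systemd" cgroup driver detected on the host), run conmon in the pod cgroup, and allow unprivileged binds to low ports inside pods. The resulting keys can be spot-checked with (a sketch; output is approximate):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # cgroup_manager = "systemd"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",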
	I1123 07:55:58.607079   15847 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 07:55:58.613355   15847 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1123 07:55:58.613389   15847 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1123 07:55:58.623831   15847 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 07:55:58.630202   15847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:55:58.700718   15847 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 07:55:58.827515   15847 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 07:55:58.827583   15847 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 07:55:58.831266   15847 start.go:564] Will wait 60s for crictl version
	I1123 07:55:58.831310   15847 ssh_runner.go:195] Run: which crictl
	I1123 07:55:58.834550   15847 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 07:55:58.855854   15847 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
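	crictl found CRI-O at the socket configured in the /etc/crictl.yaml written earlier in this log; the equivalent explicit invocation, useful when no crictl.yaml is present, is (a sketch):
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version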
	I1123 07:55:58.855957   15847 ssh_runner.go:195] Run: crio --version
	I1123 07:55:58.881234   15847 ssh_runner.go:195] Run: crio --version
	I1123 07:55:58.907281   15847 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 07:55:58.908341   15847 cli_runner.go:164] Run: docker network inspect addons-959783 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 07:55:58.924162   15847 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 07:55:58.927673   15847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
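	The bash one-liner above swaps in a fresh host.minikube.internal mapping without duplicating it: it filters any existing entry out of /etc/hosts, appends the gateway address, writes to a temp file, and copies the result back. A verification sketch:
	    docker exec addons-959783 grep host.minikube.internal /etc/hosts
	    # 192.168.49.1	host.minikube.internal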
	I1123 07:55:58.936828   15847 kubeadm.go:884] updating cluster {Name:addons-959783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 07:55:58.936916   15847 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 07:55:58.936952   15847 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 07:55:58.965029   15847 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 07:55:58.965044   15847 crio.go:433] Images already preloaded, skipping extraction
	I1123 07:55:58.965077   15847 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 07:55:58.987295   15847 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 07:55:58.987312   15847 cache_images.go:86] Images are preloaded, skipping loading
	I1123 07:55:58.987321   15847 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 07:55:58.987405   15847 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-959783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 07:55:58.987490   15847 ssh_runner.go:195] Run: crio config
	I1123 07:55:59.027327   15847 cni.go:84] Creating CNI manager for ""
	I1123 07:55:59.027353   15847 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:55:59.027369   15847 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 07:55:59.027389   15847 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-959783 NodeName:addons-959783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 07:55:59.027511   15847 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-959783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
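
The generated kubeadm config above makes two deliberate CI-friendly choices: the KubeletConfiguration disables disk-pressure handling (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at 0%, per the "# disable disk resource management by default" comment), and the KubeProxyConfiguration zeroes the conntrack knobs so kube-proxy skips the sysctl writes called out in the "# Skip setting ..." comments, which would fail inside the docker-driver container. A minimal sketch that decodes the multi-document file and prints the kubelet settings, assuming gopkg.in/yaml.v3 and the /var/tmp/minikube/kubeadm.yaml path from the scp below:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// doc captures only the fields inspected here; other keys are ignored.
	type doc struct {
		Kind                     string            `yaml:"kind"`
		CgroupDriver             string            `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string            `yaml:"containerRuntimeEndpoint"`
		EvictionHard             map[string]string `yaml:"evictionHard"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f) // four "---"-separated documents
		for {
			var d doc
			if err := dec.Decode(&d); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if d.Kind == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", d.CgroupDriver)        // systemd
				fmt.Println("runtime:", d.ContainerRuntimeEndpoint) // unix:///var/run/crio/crio.sock
				fmt.Println("evictionHard:", d.EvictionHard)        // all thresholds 0%
			}
		}
	}
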
	
	I1123 07:55:59.027572   15847 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 07:55:59.034633   15847 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 07:55:59.034679   15847 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 07:55:59.041561   15847 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 07:55:59.052801   15847 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 07:55:59.066318   15847 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1123 07:55:59.077257   15847 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 07:55:59.080374   15847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 07:55:59.088928   15847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:55:59.164003   15847 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 07:55:59.186570   15847 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783 for IP: 192.168.49.2
	I1123 07:55:59.186589   15847 certs.go:195] generating shared ca certs ...
	I1123 07:55:59.186608   15847 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.186752   15847 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 07:55:59.312301   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt ...
	I1123 07:55:59.312322   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt: {Name:mkf9ae3aa353a1038c3c9284f3b747dfb88e5a7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.312457   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key ...
	I1123 07:55:59.312467   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key: {Name:mk2a71d7a34a8fc26d229e9c3bec7fe566491a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.312537   15847 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 07:55:59.357772   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt ...
	I1123 07:55:59.357791   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt: {Name:mk1712ce5ec45204d6baf790505c850656fa6dfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.357948   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key ...
	I1123 07:55:59.357960   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key: {Name:mka6eeff402c2a4034a73a12e7cc509daf81884d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.358028   15847 certs.go:257] generating profile certs ...
	I1123 07:55:59.358091   15847 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.key
	I1123 07:55:59.358106   15847 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt with IP's: []
	I1123 07:55:59.395650   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt ...
	I1123 07:55:59.395665   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: {Name:mk67858a934f6b320447a88246696849506d01ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.395778   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.key ...
	I1123 07:55:59.395788   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.key: {Name:mkf39414074f91513fe9b576d592bf8e68eec103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.395851   15847 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key.7ae8d156
	I1123 07:55:59.395868   15847 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt.7ae8d156 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 07:55:59.424071   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt.7ae8d156 ...
	I1123 07:55:59.424084   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt.7ae8d156: {Name:mk26beb0192f2f4e60dbbbd4abed4e3d12e48fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.424169   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key.7ae8d156 ...
	I1123 07:55:59.424181   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key.7ae8d156: {Name:mke123914847a03e89813cba5428a8cf87a25d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.424243   15847 certs.go:382] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt.7ae8d156 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt
	I1123 07:55:59.424322   15847 certs.go:386] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key.7ae8d156 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key
	I1123 07:55:59.424372   15847 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.key
	I1123 07:55:59.424388   15847 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.crt with IP's: []
	I1123 07:55:59.524614   15847 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.crt ...
	I1123 07:55:59.524633   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.crt: {Name:mka0e42674bf934edeecfcd2657510a7d7d26a26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:55:59.524755   15847 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.key ...
	I1123 07:55:59.524766   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.key: {Name:mk38ad5b056b853f9ef7993f6960383df204de9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
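
Note the IP SANs chosen for the apiserver cert at 07:55:59.395868: 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the apiserver's in-cluster VIP), 127.0.0.1, 10.0.0.1, and the node IP 192.168.49.2. A small sketch of the service-VIP derivation (the helper name is illustrative):

	package main

	import (
		"fmt"
		"net"
	)

	// firstServiceIP derives the apiserver's in-cluster VIP from the
	// service CIDR: the network address plus one.
	func firstServiceIP(cidr string) (net.IP, error) {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		ip := ipnet.IP.To4()
		next := make(net.IP, len(ip))
		copy(next, ip)
		next[3]++ // 10.96.0.0 -> 10.96.0.1
		return next, nil
	}

	func main() {
		ip, err := firstServiceIP("10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 10.96.0.1, as in the SAN list above
	}
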
	I1123 07:55:59.524935   15847 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 07:55:59.524969   15847 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 07:55:59.524995   15847 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 07:55:59.525019   15847 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 07:55:59.525545   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 07:55:59.542197   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 07:55:59.558015   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 07:55:59.573485   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 07:55:59.588804   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 07:55:59.603886   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 07:55:59.619248   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 07:55:59.634502   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 07:55:59.649842   15847 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 07:55:59.666797   15847 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 07:55:59.677772   15847 ssh_runner.go:195] Run: openssl version
	I1123 07:55:59.683231   15847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 07:55:59.692799   15847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:55:59.695974   15847 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:55:59.696013   15847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 07:55:59.728927   15847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
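
The symlink name b5213941.0 created above is not arbitrary: OpenSSL-based clients locate CAs under /etc/ssl/certs by the certificate's subject hash, which is exactly what the openssl x509 -hash -noout run computes. A sketch of the same lookup (the helper name is illustrative, not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash asks openssl for the cert's subject hash, as the run
	// above does; the hash becomes the "<hash>.0" symlink name.
	func subjectHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
	}
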
	I1123 07:55:59.736451   15847 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 07:55:59.739474   15847 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 07:55:59.739525   15847 kubeadm.go:401] StartCluster: {Name:addons-959783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-959783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 07:55:59.739595   15847 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:55:59.739645   15847 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:55:59.764548   15847 cri.go:89] found id: ""
	I1123 07:55:59.764598   15847 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 07:55:59.771557   15847 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 07:55:59.778360   15847 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 07:55:59.778427   15847 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 07:55:59.785079   15847 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 07:55:59.785093   15847 kubeadm.go:158] found existing configuration files:
	
	I1123 07:55:59.785124   15847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 07:55:59.791672   15847 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 07:55:59.791721   15847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 07:55:59.798113   15847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 07:55:59.804641   15847 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 07:55:59.804675   15847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 07:55:59.811086   15847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 07:55:59.817832   15847 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 07:55:59.817883   15847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 07:55:59.824288   15847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 07:55:59.830726   15847 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 07:55:59.830766   15847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
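
Each of the four grep/rm pairs above applies the same rule: a kubeconfig that does not point at https://control-plane.minikube.internal:8443 (including one that does not exist, which also makes grep exit non-zero, as seen here on first start) is removed so kubeadm regenerates it. A condensed sketch of that loop, where run stands in for minikube's ssh_runner and the helper name is illustrative:

	package main

	import "fmt"

	// cleanStaleKubeconfigs condenses the four grep/rm pairs above. grep
	// exits non-zero both when the endpoint is absent and when the file is
	// missing; either way the file is removed and kubeadm rewrites it.
	func cleanStaleKubeconfigs(run func(cmd string) error) {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
				_ = run("sudo rm -f " + f) // rm -f never fails on a missing file
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs(func(cmd string) error {
			fmt.Println("would run:", cmd)
			return fmt.Errorf("exit status 2") // simulate the missing files seen above
		})
	}
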
	I1123 07:55:59.837047   15847 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 07:55:59.870472   15847 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 07:55:59.870547   15847 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 07:55:59.898778   15847 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 07:55:59.898864   15847 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 07:55:59.898916   15847 kubeadm.go:319] OS: Linux
	I1123 07:55:59.898972   15847 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 07:55:59.899035   15847 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 07:55:59.899099   15847 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 07:55:59.899162   15847 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 07:55:59.899230   15847 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 07:55:59.899303   15847 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 07:55:59.899394   15847 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 07:55:59.899472   15847 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 07:55:59.951560   15847 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 07:55:59.951697   15847 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 07:55:59.951835   15847 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 07:55:59.958446   15847 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 07:55:59.960281   15847 out.go:252]   - Generating certificates and keys ...
	I1123 07:55:59.960384   15847 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 07:55:59.960475   15847 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 07:56:00.567610   15847 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 07:56:00.781734   15847 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 07:56:01.026817   15847 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 07:56:01.633126   15847 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 07:56:01.727367   15847 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 07:56:01.727515   15847 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-959783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 07:56:02.199062   15847 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 07:56:02.199220   15847 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-959783 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 07:56:02.401047   15847 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 07:56:02.885187   15847 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 07:56:03.085556   15847 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 07:56:03.085644   15847 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 07:56:03.166832   15847 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 07:56:03.582969   15847 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 07:56:03.870715   15847 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 07:56:04.161284   15847 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 07:56:04.462888   15847 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 07:56:04.463364   15847 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 07:56:04.466793   15847 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 07:56:04.468064   15847 out.go:252]   - Booting up control plane ...
	I1123 07:56:04.468145   15847 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 07:56:04.468208   15847 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 07:56:04.468805   15847 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 07:56:04.481217   15847 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 07:56:04.481315   15847 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 07:56:04.487202   15847 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 07:56:04.487442   15847 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 07:56:04.487489   15847 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 07:56:04.580716   15847 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 07:56:04.580886   15847 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 07:56:05.582129   15847 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001571554s
	I1123 07:56:05.585830   15847 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 07:56:05.585948   15847 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 07:56:05.586060   15847 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 07:56:05.586187   15847 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 07:56:07.482152   15847 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.896256032s
	I1123 07:56:07.843937   15847 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.258082644s
	I1123 07:56:09.587741   15847 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00182609s
	I1123 07:56:09.597413   15847 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 07:56:09.605848   15847 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 07:56:09.613241   15847 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 07:56:09.613490   15847 kubeadm.go:319] [mark-control-plane] Marking the node addons-959783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 07:56:09.619770   15847 kubeadm.go:319] [bootstrap-token] Using token: 3f5cqk.xr5m0zrekevhko6l
	I1123 07:56:09.620991   15847 out.go:252]   - Configuring RBAC rules ...
	I1123 07:56:09.621157   15847 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 07:56:09.623460   15847 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 07:56:09.627525   15847 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 07:56:09.630594   15847 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 07:56:09.632619   15847 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 07:56:09.634531   15847 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 07:56:09.992927   15847 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 07:56:10.404794   15847 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 07:56:10.992793   15847 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 07:56:10.993858   15847 kubeadm.go:319] 
	I1123 07:56:10.993946   15847 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 07:56:10.993956   15847 kubeadm.go:319] 
	I1123 07:56:10.994066   15847 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 07:56:10.994076   15847 kubeadm.go:319] 
	I1123 07:56:10.994118   15847 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 07:56:10.994218   15847 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 07:56:10.994306   15847 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 07:56:10.994318   15847 kubeadm.go:319] 
	I1123 07:56:10.994398   15847 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 07:56:10.994407   15847 kubeadm.go:319] 
	I1123 07:56:10.994471   15847 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 07:56:10.994480   15847 kubeadm.go:319] 
	I1123 07:56:10.994549   15847 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 07:56:10.994652   15847 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 07:56:10.994765   15847 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 07:56:10.994774   15847 kubeadm.go:319] 
	I1123 07:56:10.994883   15847 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 07:56:10.994982   15847 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 07:56:10.994990   15847 kubeadm.go:319] 
	I1123 07:56:10.995085   15847 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3f5cqk.xr5m0zrekevhko6l \
	I1123 07:56:10.995186   15847 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 07:56:10.995220   15847 kubeadm.go:319] 	--control-plane 
	I1123 07:56:10.995231   15847 kubeadm.go:319] 
	I1123 07:56:10.995338   15847 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 07:56:10.995350   15847 kubeadm.go:319] 
	I1123 07:56:10.995449   15847 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3f5cqk.xr5m0zrekevhko6l \
	I1123 07:56:10.995566   15847 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
	I1123 07:56:10.997091   15847 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 07:56:10.997205   15847 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
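
The --ignore-preflight-errors list passed to kubeadm init at 07:55:59.837047 is what lets the run succeed despite the SystemVerification warning just above: inside a docker-driver container the kernel-config and resource checks describe the host rather than the nested node, and the directory/manifest checks would trip over state minikube pre-seeds itself. A sketch of how that flag value is assembled (check names copied from the log; the variable is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Checks skipped on the docker driver: some cannot run in a
		// container (SystemVerification, the bridge-nf sysctl read), the
		// rest cover directories and manifests minikube manages itself.
		ignores := []string{
			"DirAvailable--etc-kubernetes-manifests",
			"DirAvailable--var-lib-minikube",
			"DirAvailable--var-lib-minikube-etcd",
			"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
			"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
			"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
			"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
			"Port-10250", "Swap", "NumCPU", "Mem",
			"SystemVerification",
			"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
		}
		fmt.Println("--ignore-preflight-errors=" + strings.Join(ignores, ","))
	}
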
	I1123 07:56:10.997234   15847 cni.go:84] Creating CNI manager for ""
	I1123 07:56:10.997245   15847 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 07:56:10.999367   15847 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 07:56:11.000431   15847 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 07:56:11.004842   15847 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 07:56:11.004859   15847 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 07:56:11.016560   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 07:56:11.199572   15847 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 07:56:11.199638   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:11.199682   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-959783 minikube.k8s.io/updated_at=2025_11_23T07_56_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=addons-959783 minikube.k8s.io/primary=true
	I1123 07:56:11.209467   15847 ops.go:34] apiserver oom_adj: -16
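
The oom_adj probe at 07:56:11.199572 confirms the kernel OOM killer will prefer other victims over kube-apiserver (-16 on the legacy oom_adj scale is strongly protected). A sketch of the same read (the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiServerOOMAdj reruns the probe from the log: resolve the apiserver
	// pid with pgrep and read its legacy oom_adj value.
	func apiServerOOMAdj() (string, error) {
		out, err := exec.Command("/bin/bash", "-c",
			"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		v, err := apiServerOOMAdj()
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver oom_adj:", v) // the run above logged -16
	}
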
	I1123 07:56:11.273678   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:11.774332   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:12.273731   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:12.773788   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:13.274151   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:13.774748   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:14.274471   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:14.773959   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:15.274487   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:15.773737   15847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 07:56:15.844058   15847 kubeadm.go:1114] duration metric: took 4.644475758s to wait for elevateKubeSystemPrivileges
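
The ten "get sa default" runs from 07:56:11.273 to 07:56:15.773 are a fixed-interval readiness poll: the default ServiceAccount only exists once the controller-manager's token controller has caught up, and that gates the minikube-rbac cluster-admin binding created above. A plain-loop sketch of the poll (helper names illustrative, cadence matching the ~500ms visible in the timestamps):

	package main

	import (
		"fmt"
		"time"
	)

	// waitForDefaultSA polls until "kubectl get sa default" succeeds or
	// the deadline passes.
	func waitForDefaultSA(run func(string) error, timeout time.Duration) error {
		const cmd = "sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig"
		deadline := time.Now().Add(timeout)
		for {
			if err := run(cmd); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("default serviceaccount not ready after %s", timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		calls := 0
		err := waitForDefaultSA(func(string) error {
			calls++
			if calls < 3 { // simulate the token controller catching up
				return fmt.Errorf("not yet")
			}
			return nil
		}, 5*time.Second)
		fmt.Println("ready:", err == nil, "after", calls, "attempts")
	}
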
	I1123 07:56:15.844092   15847 kubeadm.go:403] duration metric: took 16.104570337s to StartCluster
	I1123 07:56:15.844113   15847 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:15.844232   15847 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 07:56:15.844843   15847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 07:56:15.845228   15847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 07:56:15.845281   15847 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 07:56:15.845372   15847 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 07:56:15.845438   15847 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:15.845507   15847 addons.go:70] Setting gcp-auth=true in profile "addons-959783"
	I1123 07:56:15.845514   15847 addons.go:70] Setting yakd=true in profile "addons-959783"
	I1123 07:56:15.845526   15847 mustload.go:66] Loading cluster: addons-959783
	I1123 07:56:15.845529   15847 addons.go:239] Setting addon yakd=true in "addons-959783"
	I1123 07:56:15.845550   15847 addons.go:70] Setting registry=true in profile "addons-959783"
	I1123 07:56:15.845564   15847 addons.go:70] Setting registry-creds=true in profile "addons-959783"
	I1123 07:56:15.845573   15847 addons.go:239] Setting addon registry=true in "addons-959783"
	I1123 07:56:15.845586   15847 addons.go:239] Setting addon registry-creds=true in "addons-959783"
	I1123 07:56:15.845601   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845620   15847 addons.go:70] Setting volcano=true in profile "addons-959783"
	I1123 07:56:15.845636   15847 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-959783"
	I1123 07:56:15.845656   15847 addons.go:239] Setting addon volcano=true in "addons-959783"
	I1123 07:56:15.845662   15847 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:56:15.845672   15847 addons.go:70] Setting storage-provisioner=true in profile "addons-959783"
	I1123 07:56:15.845702   15847 addons.go:239] Setting addon storage-provisioner=true in "addons-959783"
	I1123 07:56:15.845742   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845747   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845773   15847 addons.go:70] Setting cloud-spanner=true in profile "addons-959783"
	I1123 07:56:15.845841   15847 addons.go:239] Setting addon cloud-spanner=true in "addons-959783"
	I1123 07:56:15.845886   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845998   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846229   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846391   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.845556   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.846776   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.845657   15847 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-959783"
	I1123 07:56:15.846928   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.847269   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.847603   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.847742   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846038   15847 addons.go:70] Setting inspektor-gadget=true in profile "addons-959783"
	I1123 07:56:15.848007   15847 addons.go:239] Setting addon inspektor-gadget=true in "addons-959783"
	I1123 07:56:15.848037   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.848158   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846059   15847 addons.go:70] Setting ingress-dns=true in profile "addons-959783"
	I1123 07:56:15.848433   15847 addons.go:239] Setting addon ingress-dns=true in "addons-959783"
	I1123 07:56:15.848473   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.848656   15847 out.go:179] * Verifying Kubernetes components...
	I1123 07:56:15.846069   15847 addons.go:70] Setting volumesnapshots=true in profile "addons-959783"
	I1123 07:56:15.848914   15847 addons.go:239] Setting addon volumesnapshots=true in "addons-959783"
	I1123 07:56:15.849003   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.846048   15847 addons.go:70] Setting ingress=true in profile "addons-959783"
	I1123 07:56:15.849080   15847 addons.go:239] Setting addon ingress=true in "addons-959783"
	I1123 07:56:15.849111   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.849561   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.850062   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.846087   15847 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-959783"
	I1123 07:56:15.850458   15847 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-959783"
	I1123 07:56:15.846097   15847 addons.go:70] Setting metrics-server=true in profile "addons-959783"
	I1123 07:56:15.846129   15847 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-959783"
	I1123 07:56:15.846139   15847 addons.go:70] Setting default-storageclass=true in profile "addons-959783"
	I1123 07:56:15.846076   15847 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-959783"
	I1123 07:56:15.850747   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.850868   15847 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-959783"
	I1123 07:56:15.850897   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.850923   15847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 07:56:15.851321   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.851326   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.850749   15847 addons.go:239] Setting addon metrics-server=true in "addons-959783"
	I1123 07:56:15.852553   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.850801   15847 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-959783"
	I1123 07:56:15.853291   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.853943   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.850819   15847 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-959783"
	I1123 07:56:15.855668   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.855730   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.861424   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.864312   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	W1123 07:56:15.892065   15847 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 07:56:15.917887   15847 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 07:56:15.918129   15847 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1123 07:56:15.919257   15847 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 07:56:15.919847   15847 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 07:56:15.919865   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 07:56:15.919921   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.921162   15847 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 07:56:15.922059   15847 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 07:56:15.922072   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 07:56:15.922142   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.922573   15847 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 07:56:15.922795   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 07:56:15.923155   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.926699   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.932718   15847 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:15.935229   15847 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 07:56:15.937291   15847 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:15.938678   15847 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-959783"
	I1123 07:56:15.938731   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.939060   15847 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 07:56:15.939105   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 07:56:15.939171   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.939174   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.943699   15847 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 07:56:15.944722   15847 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 07:56:15.944765   15847 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 07:56:15.944834   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.957529   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 07:56:15.957645   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 07:56:15.958741   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 07:56:15.958758   15847 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 07:56:15.958811   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.958962   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 07:56:15.960043   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 07:56:15.961154   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 07:56:15.963614   15847 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 07:56:15.963915   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 07:56:15.966316   15847 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 07:56:15.966337   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 07:56:15.966387   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.966462   15847 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 07:56:15.967725   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 07:56:15.967761   15847 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 07:56:15.967776   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 07:56:15.967847   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.971386   15847 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 07:56:15.971961   15847 addons.go:239] Setting addon default-storageclass=true in "addons-959783"
	I1123 07:56:15.972158   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:15.972453   15847 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 07:56:15.972465   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 07:56:15.972506   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.973793   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 07:56:15.974007   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:15.976177   15847 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 07:56:15.976762   15847 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 07:56:15.980369   15847 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 07:56:15.980418   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 07:56:15.980494   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.983792   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 07:56:15.983812   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 07:56:15.983860   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:15.988218   15847 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 07:56:15.989269   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 07:56:15.989287   15847 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 07:56:15.989336   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:16.002397   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.004958   15847 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 07:56:16.006324   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.007000   15847 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 07:56:16.009145   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 07:56:16.009198   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:16.014581   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.017837   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.024846   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.030006   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.042760   15847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 07:56:16.047148   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.047727   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.056510   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.058822   15847 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 07:56:16.058856   15847 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 07:56:16.058902   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:16.064793   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.067834   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.069224   15847 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 07:56:16.070339   15847 out.go:179]   - Using image docker.io/busybox:stable
	I1123 07:56:16.071702   15847 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 07:56:16.071725   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 07:56:16.071780   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	W1123 07:56:16.073091   15847 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 07:56:16.073117   15847 retry.go:31] will retry after 264.431648ms: ssh: handshake failed: EOF
	I1123 07:56:16.080045   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.096321   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	W1123 07:56:16.099369   15847 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 07:56:16.099393   15847 retry.go:31] will retry after 343.425901ms: ssh: handshake failed: EOF
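The two handshake EOFs above are soaked up by minikube's retry helper (retry.go:31) with jittered backoff instead of failing the addon install. As a rough shell sketch of that dial-and-retry loop, where the port, key path and delays come from the log lines above and everything else is illustrative:

    # illustrative only: redial the forwarded Docker port the way retry.go does above
    for delay in 0.264 0.343; do
      ssh -i ~/.minikube/machines/addons-959783/id_rsa -p 32768 \
          -o StrictHostKeyChecking=no docker@127.0.0.1 true && break
      sleep "$delay"
    done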
	I1123 07:56:16.104499   15847 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 07:56:16.117096   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.119006   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:16.188657   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 07:56:16.189794   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 07:56:16.194133   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 07:56:16.210247   15847 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 07:56:16.210277   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 07:56:16.214026   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 07:56:16.224957   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 07:56:16.237052   15847 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 07:56:16.237075   15847 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 07:56:16.239175   15847 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 07:56:16.239248   15847 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 07:56:16.239561   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 07:56:16.239575   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 07:56:16.245835   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 07:56:16.245904   15847 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 07:56:16.251175   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 07:56:16.266499   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 07:56:16.275491   15847 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 07:56:16.275580   15847 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 07:56:16.278425   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 07:56:16.278647   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 07:56:16.278616   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 07:56:16.287596   15847 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 07:56:16.287659   15847 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 07:56:16.295311   15847 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 07:56:16.295327   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 07:56:16.298645   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 07:56:16.298702   15847 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 07:56:16.315674   15847 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 07:56:16.315871   15847 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 07:56:16.315801   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 07:56:16.315929   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 07:56:16.343193   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 07:56:16.350131   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 07:56:16.350153   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 07:56:16.352213   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 07:56:16.352228   15847 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 07:56:16.359450   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 07:56:16.381927   15847 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 07:56:16.381954   15847 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 07:56:16.398847   15847 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
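The long sed pipeline at 07:56:16.042760 is what produced this line: it edits the coredns ConfigMap in place, inserting a log directive before errors and a hosts block before the forward plugin. Reconstructed from those two insert expressions, the relevant Corefile excerpt afterwards looks like this (unrelated directives elided):

            log
            errors
            ...
            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf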
	I1123 07:56:16.400630   15847 node_ready.go:35] waiting up to 6m0s for node "addons-959783" to be "Ready" ...
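node_ready.go now polls the node object until its Ready condition reports True; the recurring "will retry" warnings below are that loop. Outside the harness, a one-line equivalent would be kubectl wait (a sketch, assuming the kubeconfig context carries the profile name):

    kubectl --context addons-959783 wait node/addons-959783 --for=condition=Ready --timeout=6m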
	I1123 07:56:16.400971   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 07:56:16.400994   15847 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 07:56:16.418413   15847 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 07:56:16.418434   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 07:56:16.434577   15847 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 07:56:16.434598   15847 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 07:56:16.466787   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 07:56:16.466805   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 07:56:16.496838   15847 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:56:16.496860   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 07:56:16.501325   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 07:56:16.517772   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 07:56:16.517851   15847 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 07:56:16.530047   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:56:16.564361   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 07:56:16.564437   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 07:56:16.571490   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 07:56:16.660436   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 07:56:16.660460   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 07:56:16.679931   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 07:56:16.685727   15847 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 07:56:16.685821   15847 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 07:56:16.716613   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 07:56:16.908258   15847 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-959783" context rescaled to 1 replicas
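The rescale is an ordinary replica change on the coredns deployment (multiple DNS replicas buy nothing on a single-node cluster). A sketch of the same operation done by hand:

    kubectl --context addons-959783 -n kube-system scale deployment coredns --replicas=1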
	I1123 07:56:17.382406   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.193648945s)
	I1123 07:56:17.382445   15847 addons.go:495] Verifying addon ingress=true in "addons-959783"
	I1123 07:56:17.382593   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.188418241s)
	I1123 07:56:17.382534   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.192712172s)
	I1123 07:56:17.382663   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.168611763s)
	I1123 07:56:17.382728   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.157742809s)
	I1123 07:56:17.382776   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131543831s)
	I1123 07:56:17.382824   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.116305488s)
	I1123 07:56:17.382895   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.104177084s)
	I1123 07:56:17.382992   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.0397759s)
	I1123 07:56:17.383012   15847 addons.go:495] Verifying addon registry=true in "addons-959783"
	I1123 07:56:17.383072   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.023597025s)
	I1123 07:56:17.383165   15847 addons.go:495] Verifying addon metrics-server=true in "addons-959783"
	I1123 07:56:17.384874   15847 out.go:179] * Verifying ingress addon...
	I1123 07:56:17.384885   15847 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-959783 service yakd-dashboard -n yakd-dashboard
	
	I1123 07:56:17.384958   15847 out.go:179] * Verifying registry addon...
	I1123 07:56:17.387059   15847 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 07:56:17.387063   15847 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1123 07:56:17.388266   15847 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
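This warning is an optimistic-concurrency write conflict, not a broken addon: marking local-path as the default StorageClass raced with a concurrent update to the same object, and the callback does not retry. If the class is left non-default, re-applying the annotation by hand is enough (a sketch; the class name comes from the error text above):

    kubectl --context addons-959783 patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'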
	I1123 07:56:17.389588   15847 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 07:56:17.389602   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:17.389862   15847 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 07:56:17.389875   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
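kapi.go:96 implements readiness as a label-selector poll: list the pods matching the selector, then re-check until every one reports Ready. A rough kubectl equivalent for the two selectors above, with namespaces taken from the log, would be:

    kubectl --context addons-959783 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
    kubectl --context addons-959783 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m

The difference is that kubectl wait errors out immediately when no pod matches the selector yet, which is why the harness keeps its own poll loop instead.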
	I1123 07:56:17.749879   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.219792764s)
	I1123 07:56:17.749917   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.178406529s)
	W1123 07:56:17.749931   15847 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 07:56:17.749957   15847 retry.go:31] will retry after 205.163164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	[stdout and stderr identical to the apply failure above]
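This failure, logged once by the apply itself and once more by the retry scheduler, is the usual CRD-establishment race: the volumesnapshot CRDs and a VolumeSnapshotClass instantiating them travel in the same kubectl apply, and the API server has not registered the new kind by the time the custom resource is validated, hence "ensure CRDs are installed first". minikube simply retries (the apply --force at 07:56:17.955649 below succeeds); the explicit two-phase version would look like:

    # sketch: apply the CRDs, wait until they are established, then apply the CRs
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml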
	I1123 07:56:17.750006   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.069973495s)
	I1123 07:56:17.750175   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.033527273s)
	I1123 07:56:17.750191   15847 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-959783"
	I1123 07:56:17.751448   15847 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 07:56:17.753513   15847 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 07:56:17.755751   15847 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:56:17.755771   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:17.890088   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:17.890200   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:17.955649   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 07:56:18.255843   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:18.389377   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:18.389526   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:18.403317   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:18.756 → 07:56:20.256: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	I1123 07:56:20.360403   15847 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.404714891s)
	I1123 07:56:20.389919   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:20.390109   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:20.756121   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:20.889313   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:20.889492   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:20.902826   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:21.255 → 07:56:23.389: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:23.402815   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:23.535718   15847 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 07:56:23.535781   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:23.552345   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:23.661619   15847 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 07:56:23.673167   15847 addons.go:239] Setting addon gcp-auth=true in "addons-959783"
	I1123 07:56:23.673222   15847 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:56:23.673554   15847 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:56:23.690736   15847 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 07:56:23.690783   15847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:56:23.706743   15847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:56:23.757019   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:23.802566   15847 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 07:56:23.803597   15847 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 07:56:23.804679   15847 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 07:56:23.804706   15847 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 07:56:23.816750   15847 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 07:56:23.816765   15847 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 07:56:23.828366   15847 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 07:56:23.828383   15847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 07:56:23.839875   15847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 07:56:23.890365   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:23.890445   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:24.125585   15847 addons.go:495] Verifying addon gcp-auth=true in "addons-959783"
	I1123 07:56:24.126911   15847 out.go:179] * Verifying gcp-auth addon...
	I1123 07:56:24.128444   15847 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 07:56:24.131431   15847 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 07:56:24.131447   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[07:56:24.256 → 07:56:25.390: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:25.402877   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:25.631 → 07:56:27.889: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:27.903229   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:28.131 → 07:56:30.390: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:30.402595   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:30.631 → 07:56:32.389: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:32.403240   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:32.631 → 07:56:34.889: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:34.903008   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:35.131 → 07:56:37.389: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:37.402314   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:37.630 → 07:56:39.389: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:39.403154   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:39.631 → 07:56:41.890: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:41.902733   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:42.130 → 07:56:43.890: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:43.902932   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	[07:56:44.131 → 07:56:46.389: kapi.go:96 wait lines for "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry" and "app.kubernetes.io/name=ingress-nginx" repeat unchanged, all still Pending]
	W1123 07:56:46.403135   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:46.631610   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:46.755561   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:46.889682   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:46.889791   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:47.131481   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:47.256989   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:47.390137   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:47.390188   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:47.631159   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:47.756189   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:47.889237   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:47.889412   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:48.131189   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:48.256545   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:48.389485   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:48.389676   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:48.631437   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:48.756417   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:48.889378   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:48.889432   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:48.903023   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
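The kapi.go:96 lines repeating above and below are minikube's per-addon wait loop: each addon's pods (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) are polled by label selector roughly twice per second, and "Pending: [<nil>]" means no matching pod has reported a usable status yet. A minimal client-go sketch of that loop, assuming a kubeconfig at the default location and the kube-system namespace (both assumptions of the sketch, not read from this log):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsBySelector polls until at least one pod matches selector and
    // every match is Running and Ready, mirroring the kapi.go loop in this log.
    func waitForPodsBySelector(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning || !podReady(&p) {
                    ready = false
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                }
            }
            if ready {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // the log shows ~2 polls per second
            }
        }
    }

    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitForPodsBySelector(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
            log.Fatal(err)
        }
    }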
	I1123 07:56:49.131472   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:49.256303   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:49.389395   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:49.389519   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:49.631344   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:49.756499   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:49.889681   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:49.889803   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:50.130318   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:50.256409   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:50.389433   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:50.389546   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:50.631350   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:50.756557   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:50.889507   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:50.889628   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:50.903164   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:51.131584   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:51.255358   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:51.389448   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:51.389543   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:51.631415   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:51.756631   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:51.889929   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:51.890188   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:52.130813   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:52.255966   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:52.390097   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:52.390235   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:52.631420   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:52.756344   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:52.889366   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:52.889597   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:53.131374   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:53.259179   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:53.389328   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:53.389381   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:53.402890   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:53.631437   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:53.756468   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:53.889712   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:53.889816   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:54.130595   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:54.255775   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:54.389841   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:54.389908   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:54.632087   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:54.755653   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:54.889760   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:54.889893   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:55.130625   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:55.255545   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:55.389901   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:55.389912   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:55.630669   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:55.755566   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:55.889473   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:55.889649   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 07:56:55.903340   15847 node_ready.go:57] node "addons-959783" has "Ready":"False" status (will retry)
	I1123 07:56:56.130603   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:56.255498   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:56.389571   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:56.389854   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:56.631503   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:56.756556   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:56.890127   15847 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 07:56:56.890157   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:56.890164   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:56.902959   15847 node_ready.go:49] node "addons-959783" is "Ready"
	I1123 07:56:56.902988   15847 node_ready.go:38] duration metric: took 40.502328698s for node "addons-959783" to be "Ready" ...
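The node_ready.go step that just completed is a poll on a single condition: the node's Ready condition flipping to True, which here took ~40.5s across the warnings above. A sketch of the underlying check, reusing the node name from the log and the same kubeconfig assumption as before:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the node's Ready condition is True, which is
    // what flips at 07:56:56 in the log above.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ready, err := nodeIsReady(context.Background(), cs, "addons-959783")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Ready:", ready)
    }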
	I1123 07:56:56.903005   15847 api_server.go:52] waiting for apiserver process to appear ...
	I1123 07:56:56.903055   15847 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 07:56:56.918739   15847 api_server.go:72] duration metric: took 41.073417085s to wait for apiserver process to appear ...
	I1123 07:56:56.918762   15847 api_server.go:88] waiting for apiserver healthz status ...
	I1123 07:56:56.918783   15847 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 07:56:56.923021   15847 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 07:56:56.923749   15847 api_server.go:141] control plane version: v1.34.1
	I1123 07:56:56.923769   15847 api_server.go:131] duration metric: took 5.000989ms to wait for apiserver health ...
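The healthz gate above is a plain HTTPS GET against the apiserver that must return 200 with body "ok". A self-contained sketch of that probe; note it skips TLS verification to stay short, whereas the real client authenticates with the kubeconfig's certificates (that substitution is the sketch's assumption):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The real check reuses the cluster's CA and client certs; skipping
            // verification here just keeps the sketch self-contained.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver returns 200 with body "ok", as in the log above.
        fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
    }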
	I1123 07:56:56.923778   15847 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 07:56:56.926828   15847 system_pods.go:59] 20 kube-system pods found
	I1123 07:56:56.926851   15847 system_pods.go:61] "amd-gpu-device-plugin-kcdzf" [e3f0739c-033b-404d-8651-715b88a2e213] Pending
	I1123 07:56:56.926870   15847 system_pods.go:61] "coredns-66bc5c9577-bzmrl" [062b2ef0-f93f-4022-b8f0-a63c7d823974] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:56:56.926876   15847 system_pods.go:61] "csi-hostpath-attacher-0" [a977a27e-c722-4201-a0f0-a0ca8bb5f495] Pending
	I1123 07:56:56.926885   15847 system_pods.go:61] "csi-hostpath-resizer-0" [ba1e1292-a73b-43ed-a6a8-e5c5cd69eaf0] Pending
	I1123 07:56:56.926890   15847 system_pods.go:61] "csi-hostpathplugin-8skb7" [078e8b91-1aff-4ba5-b419-3e99727fa05c] Pending
	I1123 07:56:56.926896   15847 system_pods.go:61] "etcd-addons-959783" [d270e85a-ae07-4e3e-883b-b6f83d9e85f1] Running
	I1123 07:56:56.926905   15847 system_pods.go:61] "kindnet-vqst5" [2384322c-daa2-40b5-9107-b18c55e3ce5a] Running
	I1123 07:56:56.926911   15847 system_pods.go:61] "kube-apiserver-addons-959783" [49ec5d7c-f7e6-4871-a3c4-ae8b16fcfa0c] Running
	I1123 07:56:56.926922   15847 system_pods.go:61] "kube-controller-manager-addons-959783" [26215941-0128-41c5-ae74-08552252b345] Running
	I1123 07:56:56.926934   15847 system_pods.go:61] "kube-ingress-dns-minikube" [8fc836c3-712f-4578-a86e-9e5f461a0e7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:56:56.926937   15847 system_pods.go:61] "kube-proxy-lrdk2" [0e382777-1804-494e-876d-80638a083b09] Running
	I1123 07:56:56.926943   15847 system_pods.go:61] "kube-scheduler-addons-959783" [d38e1eb6-419d-4b2c-b4ea-96259ab52844] Running
	I1123 07:56:56.926950   15847 system_pods.go:61] "metrics-server-85b7d694d7-87jkk" [71097df3-1b14-4559-b01c-7084f8d00b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:56:56.926953   15847 system_pods.go:61] "nvidia-device-plugin-daemonset-gft7l" [81c12107-652e-454a-9b52-5b44ffb4e5f9] Pending
	I1123 07:56:56.926958   15847 system_pods.go:61] "registry-6b586f9694-mq8bw" [e0c7828e-fc45-45aa-b3c4-89e8cad6740e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:56:56.926964   15847 system_pods.go:61] "registry-creds-764b6fb674-5nncl" [dbe053d8-8038-4931-a819-4d425afcb649] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:56:56.926969   15847 system_pods.go:61] "registry-proxy-txmj8" [61917d5c-8217-4b89-b9e1-02789e24dd18] Pending
	I1123 07:56:56.926972   15847 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q26tv" [17e5c1bb-2c63-43a1-96f2-192bedd89a52] Pending
	I1123 07:56:56.926981   15847 system_pods.go:61] "snapshot-controller-7d9fbc56b8-smbtq" [662fd35e-b22f-4ee7-ba9e-95c62fe7d1ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:56.926990   15847 system_pods.go:61] "storage-provisioner" [d02ed26c-3769-4e72-90f8-5ea46e43c143] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:56:56.926998   15847 system_pods.go:74] duration metric: took 3.213757ms to wait for pod list to return data ...
	I1123 07:56:56.927010   15847 default_sa.go:34] waiting for default service account to be created ...
	I1123 07:56:56.928716   15847 default_sa.go:45] found service account: "default"
	I1123 07:56:56.928730   15847 default_sa.go:55] duration metric: took 1.715359ms for default service account to be created ...
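The default_sa.go gate is the simplest of these checks: a Get for a ServiceAccount named "default", which kube-controller-manager creates in every namespace. A sketch under the same kubeconfig assumption as above:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The log's default_sa.go step succeeds as soon as this Get returns.
        sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("found service account:", sa.Name)
    }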
	I1123 07:56:56.928738   15847 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 07:56:56.931320   15847 system_pods.go:86] 20 kube-system pods found
	I1123 07:56:56.931344   15847 system_pods.go:89] "amd-gpu-device-plugin-kcdzf" [e3f0739c-033b-404d-8651-715b88a2e213] Pending
	I1123 07:56:56.931354   15847 system_pods.go:89] "coredns-66bc5c9577-bzmrl" [062b2ef0-f93f-4022-b8f0-a63c7d823974] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:56:56.931361   15847 system_pods.go:89] "csi-hostpath-attacher-0" [a977a27e-c722-4201-a0f0-a0ca8bb5f495] Pending
	I1123 07:56:56.931366   15847 system_pods.go:89] "csi-hostpath-resizer-0" [ba1e1292-a73b-43ed-a6a8-e5c5cd69eaf0] Pending
	I1123 07:56:56.931372   15847 system_pods.go:89] "csi-hostpathplugin-8skb7" [078e8b91-1aff-4ba5-b419-3e99727fa05c] Pending
	I1123 07:56:56.931377   15847 system_pods.go:89] "etcd-addons-959783" [d270e85a-ae07-4e3e-883b-b6f83d9e85f1] Running
	I1123 07:56:56.931382   15847 system_pods.go:89] "kindnet-vqst5" [2384322c-daa2-40b5-9107-b18c55e3ce5a] Running
	I1123 07:56:56.931387   15847 system_pods.go:89] "kube-apiserver-addons-959783" [49ec5d7c-f7e6-4871-a3c4-ae8b16fcfa0c] Running
	I1123 07:56:56.931392   15847 system_pods.go:89] "kube-controller-manager-addons-959783" [26215941-0128-41c5-ae74-08552252b345] Running
	I1123 07:56:56.931402   15847 system_pods.go:89] "kube-ingress-dns-minikube" [8fc836c3-712f-4578-a86e-9e5f461a0e7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:56:56.931407   15847 system_pods.go:89] "kube-proxy-lrdk2" [0e382777-1804-494e-876d-80638a083b09] Running
	I1123 07:56:56.931413   15847 system_pods.go:89] "kube-scheduler-addons-959783" [d38e1eb6-419d-4b2c-b4ea-96259ab52844] Running
	I1123 07:56:56.931420   15847 system_pods.go:89] "metrics-server-85b7d694d7-87jkk" [71097df3-1b14-4559-b01c-7084f8d00b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:56:56.931426   15847 system_pods.go:89] "nvidia-device-plugin-daemonset-gft7l" [81c12107-652e-454a-9b52-5b44ffb4e5f9] Pending
	I1123 07:56:56.931434   15847 system_pods.go:89] "registry-6b586f9694-mq8bw" [e0c7828e-fc45-45aa-b3c4-89e8cad6740e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:56:56.931442   15847 system_pods.go:89] "registry-creds-764b6fb674-5nncl" [dbe053d8-8038-4931-a819-4d425afcb649] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:56:56.931458   15847 system_pods.go:89] "registry-proxy-txmj8" [61917d5c-8217-4b89-b9e1-02789e24dd18] Pending
	I1123 07:56:56.931464   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q26tv" [17e5c1bb-2c63-43a1-96f2-192bedd89a52] Pending
	I1123 07:56:56.931471   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-smbtq" [662fd35e-b22f-4ee7-ba9e-95c62fe7d1ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:56.931487   15847 system_pods.go:89] "storage-provisioner" [d02ed26c-3769-4e72-90f8-5ea46e43c143] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:56:56.931504   15847 retry.go:31] will retry after 238.753536ms: missing components: kube-dns
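retry.go:31 reports jittered, growing intervals (238ms here, 314ms on the next attempt below) while kube-dns is still missing. minikube delegates this to an exponential-backoff helper; the sketch below reproduces the shape with an assumed ~1.5x growth factor and roughly ±25% jitter (both constants are illustrative, not read from minikube's source):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping an exponentially growing, jittered interval in between.
    func retryWithBackoff(fn func() error, initial time.Duration, attempts int) error {
        wait := initial
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            } else {
                // Jitter by up to +/-25% so concurrent waiters don't poll in lockstep.
                jitter := time.Duration(rand.Int63n(int64(wait)/2)) - wait/4
                d := wait + jitter
                fmt.Printf("will retry after %v: %v\n", d, err)
                time.Sleep(d)
                wait = wait * 3 / 2 // grow ~1.5x per attempt, roughly matching the log
            }
        }
        return errors.New("out of retries")
    }

    func main() {
        missing := 3
        _ = retryWithBackoff(func() error {
            if missing > 0 {
                missing--
                return errors.New("missing components: kube-dns")
            }
            return nil
        }, 250*time.Millisecond, 10)
    }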
	I1123 07:56:57.131351   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:57.234005   15847 system_pods.go:86] 20 kube-system pods found
	I1123 07:56:57.234042   15847 system_pods.go:89] "amd-gpu-device-plugin-kcdzf" [e3f0739c-033b-404d-8651-715b88a2e213] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 07:56:57.234054   15847 system_pods.go:89] "coredns-66bc5c9577-bzmrl" [062b2ef0-f93f-4022-b8f0-a63c7d823974] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 07:56:57.234062   15847 system_pods.go:89] "csi-hostpath-attacher-0" [a977a27e-c722-4201-a0f0-a0ca8bb5f495] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:56:57.234073   15847 system_pods.go:89] "csi-hostpath-resizer-0" [ba1e1292-a73b-43ed-a6a8-e5c5cd69eaf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:56:57.234093   15847 system_pods.go:89] "csi-hostpathplugin-8skb7" [078e8b91-1aff-4ba5-b419-3e99727fa05c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:56:57.234100   15847 system_pods.go:89] "etcd-addons-959783" [d270e85a-ae07-4e3e-883b-b6f83d9e85f1] Running
	I1123 07:56:57.234107   15847 system_pods.go:89] "kindnet-vqst5" [2384322c-daa2-40b5-9107-b18c55e3ce5a] Running
	I1123 07:56:57.234113   15847 system_pods.go:89] "kube-apiserver-addons-959783" [49ec5d7c-f7e6-4871-a3c4-ae8b16fcfa0c] Running
	I1123 07:56:57.234119   15847 system_pods.go:89] "kube-controller-manager-addons-959783" [26215941-0128-41c5-ae74-08552252b345] Running
	I1123 07:56:57.234126   15847 system_pods.go:89] "kube-ingress-dns-minikube" [8fc836c3-712f-4578-a86e-9e5f461a0e7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:56:57.234132   15847 system_pods.go:89] "kube-proxy-lrdk2" [0e382777-1804-494e-876d-80638a083b09] Running
	I1123 07:56:57.234138   15847 system_pods.go:89] "kube-scheduler-addons-959783" [d38e1eb6-419d-4b2c-b4ea-96259ab52844] Running
	I1123 07:56:57.234152   15847 system_pods.go:89] "metrics-server-85b7d694d7-87jkk" [71097df3-1b14-4559-b01c-7084f8d00b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:56:57.234160   15847 system_pods.go:89] "nvidia-device-plugin-daemonset-gft7l" [81c12107-652e-454a-9b52-5b44ffb4e5f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:56:57.234171   15847 system_pods.go:89] "registry-6b586f9694-mq8bw" [e0c7828e-fc45-45aa-b3c4-89e8cad6740e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:56:57.234181   15847 system_pods.go:89] "registry-creds-764b6fb674-5nncl" [dbe053d8-8038-4931-a819-4d425afcb649] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:56:57.234190   15847 system_pods.go:89] "registry-proxy-txmj8" [61917d5c-8217-4b89-b9e1-02789e24dd18] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:56:57.234198   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q26tv" [17e5c1bb-2c63-43a1-96f2-192bedd89a52] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:57.234208   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-smbtq" [662fd35e-b22f-4ee7-ba9e-95c62fe7d1ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:57.234216   15847 system_pods.go:89] "storage-provisioner" [d02ed26c-3769-4e72-90f8-5ea46e43c143] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 07:56:57.234238   15847 retry.go:31] will retry after 314.436306ms: missing components: kube-dns
	I1123 07:56:57.321481   15847 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 07:56:57.321510   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:57.389459   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:57.389581   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:57.554093   15847 system_pods.go:86] 20 kube-system pods found
	I1123 07:56:57.554129   15847 system_pods.go:89] "amd-gpu-device-plugin-kcdzf" [e3f0739c-033b-404d-8651-715b88a2e213] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 07:56:57.554138   15847 system_pods.go:89] "coredns-66bc5c9577-bzmrl" [062b2ef0-f93f-4022-b8f0-a63c7d823974] Running
	I1123 07:56:57.554148   15847 system_pods.go:89] "csi-hostpath-attacher-0" [a977a27e-c722-4201-a0f0-a0ca8bb5f495] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 07:56:57.554159   15847 system_pods.go:89] "csi-hostpath-resizer-0" [ba1e1292-a73b-43ed-a6a8-e5c5cd69eaf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 07:56:57.554171   15847 system_pods.go:89] "csi-hostpathplugin-8skb7" [078e8b91-1aff-4ba5-b419-3e99727fa05c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 07:56:57.554182   15847 system_pods.go:89] "etcd-addons-959783" [d270e85a-ae07-4e3e-883b-b6f83d9e85f1] Running
	I1123 07:56:57.554189   15847 system_pods.go:89] "kindnet-vqst5" [2384322c-daa2-40b5-9107-b18c55e3ce5a] Running
	I1123 07:56:57.554196   15847 system_pods.go:89] "kube-apiserver-addons-959783" [49ec5d7c-f7e6-4871-a3c4-ae8b16fcfa0c] Running
	I1123 07:56:57.554202   15847 system_pods.go:89] "kube-controller-manager-addons-959783" [26215941-0128-41c5-ae74-08552252b345] Running
	I1123 07:56:57.554216   15847 system_pods.go:89] "kube-ingress-dns-minikube" [8fc836c3-712f-4578-a86e-9e5f461a0e7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 07:56:57.554227   15847 system_pods.go:89] "kube-proxy-lrdk2" [0e382777-1804-494e-876d-80638a083b09] Running
	I1123 07:56:57.554233   15847 system_pods.go:89] "kube-scheduler-addons-959783" [d38e1eb6-419d-4b2c-b4ea-96259ab52844] Running
	I1123 07:56:57.554240   15847 system_pods.go:89] "metrics-server-85b7d694d7-87jkk" [71097df3-1b14-4559-b01c-7084f8d00b53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 07:56:57.554248   15847 system_pods.go:89] "nvidia-device-plugin-daemonset-gft7l" [81c12107-652e-454a-9b52-5b44ffb4e5f9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 07:56:57.554256   15847 system_pods.go:89] "registry-6b586f9694-mq8bw" [e0c7828e-fc45-45aa-b3c4-89e8cad6740e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 07:56:57.554264   15847 system_pods.go:89] "registry-creds-764b6fb674-5nncl" [dbe053d8-8038-4931-a819-4d425afcb649] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 07:56:57.554271   15847 system_pods.go:89] "registry-proxy-txmj8" [61917d5c-8217-4b89-b9e1-02789e24dd18] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 07:56:57.554281   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q26tv" [17e5c1bb-2c63-43a1-96f2-192bedd89a52] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:57.554294   15847 system_pods.go:89] "snapshot-controller-7d9fbc56b8-smbtq" [662fd35e-b22f-4ee7-ba9e-95c62fe7d1ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 07:56:57.554300   15847 system_pods.go:89] "storage-provisioner" [d02ed26c-3769-4e72-90f8-5ea46e43c143] Running
	I1123 07:56:57.554315   15847 system_pods.go:126] duration metric: took 625.571024ms to wait for k8s-apps to be running ...
	I1123 07:56:57.554325   15847 system_svc.go:44] waiting for kubelet service to be running ...
	I1123 07:56:57.554383   15847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 07:56:57.570040   15847 system_svc.go:56] duration metric: took 15.707782ms WaitForService to wait for kubelet
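The kubelet gate is not an API call at all: system_svc.go runs the systemctl command shown above inside the node over SSH and treats exit code 0 as "active". A local stand-in using os/exec in place of minikube's ssh_runner (that substitution is the sketch's assumption; the command line itself is copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; the exit code alone carries the answer.
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet service is not active:", err)
            return
        }
        fmt.Println("kubelet service is active")
    }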
	I1123 07:56:57.570071   15847 kubeadm.go:587] duration metric: took 41.724751856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 07:56:57.570092   15847 node_conditions.go:102] verifying NodePressure condition ...
	I1123 07:56:57.572825   15847 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 07:56:57.572854   15847 node_conditions.go:123] node cpu capacity is 8
	I1123 07:56:57.572872   15847 node_conditions.go:105] duration metric: took 2.773352ms to run NodePressure ...
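The NodePressure step reads these figures straight off the node object: 304681132Ki of ephemeral storage and 8 CPUs are entries in node.Status.Capacity. A sketch that prints the same two values (the pressure-condition checks the step also performs are omitted here):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-959783", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Capacity is a map of resource.Quantity values keyed by resource name.
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
        fmt.Printf("node cpu capacity is %s\n", cpu.String())
    }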
	I1123 07:56:57.572886   15847 start.go:242] waiting for startup goroutines ...
	I1123 07:56:57.652823   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:57.756494   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:57.890178   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:57.890260   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:58.132187   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:58.257309   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:58.389676   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:58.389758   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:58.631367   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:58.757966   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:58.890054   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:58.890226   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:59.132867   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:59.257959   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:59.391788   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:59.391950   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:56:59.632122   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:56:59.757117   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:56:59.890763   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:56:59.890910   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:00.131379   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:00.257431   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:00.389892   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:00.390043   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:00.631263   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:00.756457   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:00.889580   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:00.889612   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:01.131363   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:01.257517   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:01.393314   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:01.393912   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:01.632166   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:01.758085   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:01.890434   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:01.890517   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:02.132525   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:02.256530   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:02.442187   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:02.442199   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:02.631768   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:02.756578   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:02.890363   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:02.890394   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:03.131274   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:03.256354   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:03.390484   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:03.390517   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:03.631122   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:03.756636   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:03.889951   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:03.890119   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:04.131930   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:04.257024   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:04.390811   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:04.390876   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:04.632289   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:04.763762   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:04.890672   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:04.890752   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:05.131454   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:05.257599   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:05.390363   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:05.390543   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:05.632295   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:05.757096   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:05.890842   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:05.890881   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:06.132043   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:06.258352   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:06.485525   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:06.485553   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:06.653802   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:06.756627   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:06.890271   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:06.890448   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.132137   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:07.256485   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:07.390720   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:07.390948   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.631145   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:07.757043   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:07.890390   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:07.890397   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:08.131108   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:08.256515   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:08.389710   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:08.389831   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:08.632670   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:08.756730   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:08.890732   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:08.891097   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:09.132280   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:09.257179   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:09.390855   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:09.390936   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:09.631833   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:09.756959   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:09.891097   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:09.891189   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.132255   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:10.257086   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:10.390501   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.390587   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:10.631044   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:10.756277   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:10.889276   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:10.889340   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:11.132161   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:11.257059   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:11.390719   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:11.390774   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:11.631047   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:11.757249   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:11.889977   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:11.890024   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:12.132146   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:12.257471   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:12.390818   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:12.390921   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:12.631812   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:12.756877   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:12.890622   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:12.890820   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:13.131323   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:13.257042   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:13.390580   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:13.390645   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:13.631712   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:13.756785   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:13.890133   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:13.890204   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:14.131611   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:14.256381   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:14.391421   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:14.403022   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:14.631290   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:14.757345   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:14.890026   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:14.890098   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:15.132252   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:15.256997   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:15.390198   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:15.390419   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:15.630977   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:15.756543   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:15.889833   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:15.889908   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:16.131803   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:16.256556   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:16.391586   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:16.391699   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:16.631334   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:16.756674   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:16.889853   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:16.889898   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:17.131287   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:17.256578   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:17.389834   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:17.390044   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:17.632102   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:17.757139   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:17.891476   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:17.891647   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:18.133218   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:18.258783   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:18.390924   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:18.390982   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:18.632260   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:18.757278   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:18.889908   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:18.889947   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:19.131218   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:19.256313   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:19.390226   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:19.390394   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:19.632748   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:19.756418   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:19.890229   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:19.890293   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:20.130992   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:20.256640   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:20.390055   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:20.390086   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:20.631882   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:20.756440   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:20.889588   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:20.889598   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.131625   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:21.256489   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:21.389896   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 07:57:21.389986   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.631355   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:21.758585   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:21.890472   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:21.891326   15847 kapi.go:107] duration metric: took 1m4.50426627s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 07:57:22.132003   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:22.257031   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:22.390164   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:22.632297   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:22.756972   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:22.890598   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:23.131091   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:23.256940   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:23.390169   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:23.631622   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:23.756421   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:23.890309   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:24.132413   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:24.257713   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:24.389969   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:24.631306   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:24.757192   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:24.890856   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:25.131336   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:25.257155   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:25.389643   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:25.631328   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:25.757889   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:25.890752   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:26.131521   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:26.257653   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:26.390431   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:26.632170   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:26.757134   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:26.890359   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:27.131770   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:27.256248   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 07:57:27.390407   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:27.631529   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:27.755553   15847 kapi.go:107] duration metric: took 1m10.002039328s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 07:57:27.889489   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:28.130787   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:28.390632   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:28.630765   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:28.890504   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:29.131283   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:29.392382   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:29.633393   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:29.891572   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:30.132540   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:30.390657   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:30.645374   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:30.890778   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:31.130839   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:31.390423   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:31.631438   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:31.890837   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:32.132095   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:32.389560   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:32.631088   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:32.889765   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:33.131899   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:33.390038   15847 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 07:57:33.631806   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:33.890762   15847 kapi.go:107] duration metric: took 1m16.503693864s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 07:57:34.130776   15847 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 07:57:34.631595   15847 kapi.go:107] duration metric: took 1m10.503147577s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 07:57:34.633065   15847 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-959783 cluster.
	I1123 07:57:34.634065   15847 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 07:57:34.635111   15847 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1123 07:57:34.636211   15847 out.go:179] * Enabled addons: cloud-spanner, inspektor-gadget, registry-creds, amd-gpu-device-plugin, storage-provisioner, metrics-server, yakd, default-storageclass, nvidia-device-plugin, ingress-dns, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1123 07:57:34.637196   15847 addons.go:530] duration metric: took 1m18.791824473s for enable addons: enabled=[cloud-spanner inspektor-gadget registry-creds amd-gpu-device-plugin storage-provisioner metrics-server yakd default-storageclass nvidia-device-plugin ingress-dns volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1123 07:57:34.637235   15847 start.go:247] waiting for cluster config update ...
	I1123 07:57:34.637263   15847 start.go:256] writing updated cluster config ...
	I1123 07:57:34.637488   15847 ssh_runner.go:195] Run: rm -f paused
	I1123 07:57:34.641150   15847 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 07:57:34.643441   15847 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bzmrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.646809   15847 pod_ready.go:94] pod "coredns-66bc5c9577-bzmrl" is "Ready"
	I1123 07:57:34.646826   15847 pod_ready.go:86] duration metric: took 3.366862ms for pod "coredns-66bc5c9577-bzmrl" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.648463   15847 pod_ready.go:83] waiting for pod "etcd-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.651414   15847 pod_ready.go:94] pod "etcd-addons-959783" is "Ready"
	I1123 07:57:34.651429   15847 pod_ready.go:86] duration metric: took 2.949725ms for pod "etcd-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.652985   15847 pod_ready.go:83] waiting for pod "kube-apiserver-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.655831   15847 pod_ready.go:94] pod "kube-apiserver-addons-959783" is "Ready"
	I1123 07:57:34.655847   15847 pod_ready.go:86] duration metric: took 2.847621ms for pod "kube-apiserver-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:34.657321   15847 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:35.044544   15847 pod_ready.go:94] pod "kube-controller-manager-addons-959783" is "Ready"
	I1123 07:57:35.044570   15847 pod_ready.go:86] duration metric: took 387.233108ms for pod "kube-controller-manager-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:35.244177   15847 pod_ready.go:83] waiting for pod "kube-proxy-lrdk2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:35.644753   15847 pod_ready.go:94] pod "kube-proxy-lrdk2" is "Ready"
	I1123 07:57:35.644776   15847 pod_ready.go:86] duration metric: took 400.575033ms for pod "kube-proxy-lrdk2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:35.844182   15847 pod_ready.go:83] waiting for pod "kube-scheduler-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:36.245131   15847 pod_ready.go:94] pod "kube-scheduler-addons-959783" is "Ready"
	I1123 07:57:36.245155   15847 pod_ready.go:86] duration metric: took 400.948181ms for pod "kube-scheduler-addons-959783" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 07:57:36.245167   15847 pod_ready.go:40] duration metric: took 1.603994726s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 07:57:36.295158   15847 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 07:57:36.297249   15847 out.go:179] * Done! kubectl is now configured to use "addons-959783" cluster and "default" namespace by default
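
The kapi.go:96/kapi.go:107 pairs above are minikube's addon wait loop: each enabled addon is polled by pod label selector until its pods leave Pending, and a "duration metric: took ..." line is logged once the selector goes Ready. The pod_ready.go lines that follow repeat the same pattern for the kube-system control-plane pods. A minimal client-go sketch of that loop, assuming an existing clientset; the package and function names, namespace handling, and the ~500 ms cadence (read off the timestamps above) are illustrative, not minikube's actual code:

package addonwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForSelector polls pods matching selector until all are Running,
// mirroring the repeated "waiting for pod ... current state: Pending" lines.
func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or nothing scheduled yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still "Pending" from the caller's point of view
				}
			}
			return true, nil // caller then logs the "duration metric: took ..." line
		})
}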
	
	
	==> CRI-O <==
	Nov 23 07:57:55 addons-959783 crio[773]: time="2025-11-23T07:57:55.720062126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:57:55 addons-959783 crio[773]: time="2025-11-23T07:57:55.726340057Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:af31e9821fecd90ac8e183cbf79952674f2192ef6a04e5e3b227fe30a2cdaf72 UID:9713a485-d528-4f08-9f66-96c3e5b2f714 NetNS:/var/run/netns/27a89b5b-881c-47d0-aa4c-af8da4052e48 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520a78}] Aliases:map[]}"
	Nov 23 07:57:55 addons-959783 crio[773]: time="2025-11-23T07:57:55.726371295Z" level=info msg="Adding pod default_registry-test to CNI network \"kindnet\" (type=ptp)"
	Nov 23 07:57:55 addons-959783 crio[773]: time="2025-11-23T07:57:55.735872137Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:af31e9821fecd90ac8e183cbf79952674f2192ef6a04e5e3b227fe30a2cdaf72 UID:9713a485-d528-4f08-9f66-96c3e5b2f714 NetNS:/var/run/netns/27a89b5b-881c-47d0-aa4c-af8da4052e48 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520a78}] Aliases:map[]}"
	Nov 23 07:57:55 addons-959783 crio[773]: time="2025-11-23T07:57:55.736007515Z" level=info msg="Checking pod default_registry-test for CNI network kindnet (type=ptp)"
	Nov 23 07:57:55 addons-959783 crio[773]: time="2025-11-23T07:57:55.73685974Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 07:57:55 addons-959783 crio[773]: time="2025-11-23T07:57:55.737578508Z" level=info msg="Ran pod sandbox af31e9821fecd90ac8e183cbf79952674f2192ef6a04e5e3b227fe30a2cdaf72 with infra container: default/registry-test/POD" id=f66985ed-d08d-4c03-b876-29b3bd736184 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 07:57:55 addons-959783 crio[773]: time="2025-11-23T07:57:55.738645867Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:latest" id=ac99a3b8-b412-4ec1-a12f-08ecda55f8fe name=/runtime.v1.ImageService/PullImage
	Nov 23 07:57:55 addons-959783 crio[773]: time="2025-11-23T07:57:55.739972689Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:latest\""
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.322059587Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee" id=ac99a3b8-b412-4ec1-a12f-08ecda55f8fe name=/runtime.v1.ImageService/PullImage
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.32255897Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:latest" id=4d3a7840-366e-4c33-92d5-e86a1938605a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.32381667Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox" id=5a129fc0-1b07-4e87-97d7-a50c7466a1c1 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.326870299Z" level=info msg="Creating container: default/registry-test/registry-test" id=05e6eb18-0676-4d68-b30d-f1b547b01db7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.326976357Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.331960151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.332389681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.360228781Z" level=info msg="Created container b27c7cb4254a4b0be35e8ad3e29ad67d4e26eefeab031d8522f3dbebc76a75c4: default/registry-test/registry-test" id=05e6eb18-0676-4d68-b30d-f1b547b01db7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.360680436Z" level=info msg="Starting container: b27c7cb4254a4b0be35e8ad3e29ad67d4e26eefeab031d8522f3dbebc76a75c4" id=1dcee01f-acfa-4625-a51a-709d50971c03 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.362339946Z" level=info msg="Started container" PID=6729 containerID=b27c7cb4254a4b0be35e8ad3e29ad67d4e26eefeab031d8522f3dbebc76a75c4 description=default/registry-test/registry-test id=1dcee01f-acfa-4625-a51a-709d50971c03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af31e9821fecd90ac8e183cbf79952674f2192ef6a04e5e3b227fe30a2cdaf72
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.620512879Z" level=info msg="Removing container: 96582ed9e5732a084eb6a0d842558b8d91e8e6a3766e74a988cad3c1fadcd213" id=ebaf0827-de00-4aa3-8d02-ff6242f31b56 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 07:57:56 addons-959783 crio[773]: time="2025-11-23T07:57:56.629552151Z" level=info msg="Removed container 96582ed9e5732a084eb6a0d842558b8d91e8e6a3766e74a988cad3c1fadcd213: local-path-storage/helper-pod-delete-pvc-eb41d53f-743e-4287-8190-205dfc85238e/helper-pod" id=ebaf0827-de00-4aa3-8d02-ff6242f31b56 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 07:57:57 addons-959783 crio[773]: time="2025-11-23T07:57:57.625307545Z" level=info msg="Stopping pod sandbox: af31e9821fecd90ac8e183cbf79952674f2192ef6a04e5e3b227fe30a2cdaf72" id=446fb02e-e77e-403e-9e8e-209eb36de0a1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 07:57:57 addons-959783 crio[773]: time="2025-11-23T07:57:57.625534124Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:af31e9821fecd90ac8e183cbf79952674f2192ef6a04e5e3b227fe30a2cdaf72 UID:9713a485-d528-4f08-9f66-96c3e5b2f714 NetNS:/var/run/netns/27a89b5b-881c-47d0-aa4c-af8da4052e48 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520ca8}] Aliases:map[]}"
	Nov 23 07:57:57 addons-959783 crio[773]: time="2025-11-23T07:57:57.625656753Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Nov 23 07:57:57 addons-959783 crio[773]: time="2025-11-23T07:57:57.642999908Z" level=info msg="Stopped pod sandbox: af31e9821fecd90ac8e183cbf79952674f2192ef6a04e5e3b227fe30a2cdaf72" id=446fb02e-e77e-403e-9e8e-209eb36de0a1 name=/runtime.v1.RuntimeService/StopPodSandbox
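
The CRI-O excerpt traces one full sandbox lifecycle for the registry-test pod: RunPodSandbox, a CNI ADD into the kindnet network, CreateContainer/StartContainer for the busybox image, then StopPodSandbox with the matching CNI DEL. To inspect the same state directly, a small sketch against the CRI gRPC API that CRI-O serves; the socket path is CRI-O's default and an assumption here:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O's runtime socket (default path; adjust for other setups).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// List pod sandboxes: the objects whose Run/Stop events appear above.
	resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Printf("%s %s/%s state=%v\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}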
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	b27c7cb4254a4       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          1 second ago         Exited              registry-test                            0                   af31e9821fecd       registry-test                                                default
	6861720fb4c96       docker.io/library/busybox@sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737                                            7 seconds ago        Exited              busybox                                  0                   0483e879bd0bc       test-local-path                                              default
	364b183176326       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            12 seconds ago       Exited              helper-pod                               0                   cf6b59535fb4f       helper-pod-create-pvc-eb41d53f-743e-4287-8190-205dfc85238e   local-path-storage
	15eb9a2438d49       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          19 seconds ago       Running             busybox                                  0                   b26ae315b68c4       busybox                                                      default
	67e62a3782dbd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 24 seconds ago       Running             gcp-auth                                 0                   7d78df10b5bd1       gcp-auth-78565c9fb4-5cjfg                                    gcp-auth
	ea37a4f6d1d21       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             25 seconds ago       Running             controller                               0                   566ae29283c7a       ingress-nginx-controller-6c8bf45fb-k5rdb                     ingress-nginx
	4f7e9034a78fd       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             29 seconds ago       Exited              patch                                    2                   6e0a3fc9daa88       ingress-nginx-admission-patch-zf9fd                          ingress-nginx
	444c2f1efdf59       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          31 seconds ago       Running             csi-snapshotter                          0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                                     kube-system
	87f89d7ecfbd2       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          32 seconds ago       Running             csi-provisioner                          0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                                     kube-system
	85507fd988591       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            33 seconds ago       Running             liveness-probe                           0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                                     kube-system
	de6c01c726f84       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           33 seconds ago       Running             hostpath                                 0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                                     kube-system
	e547e8711c1e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            34 seconds ago       Running             gadget                                   0                   ae5ee4fcabc57       gadget-jsjqv                                                 gadget
	d835cb4f7791d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                36 seconds ago       Running             node-driver-registrar                    0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                                     kube-system
	8cc8cf367fdab       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              37 seconds ago       Running             registry-proxy                           0                   7dc4e51da0ccd       registry-proxy-txmj8                                         kube-system
	ef34710dda6e2       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     38 seconds ago       Running             nvidia-device-plugin-ctr                 0                   ec1aab8a3918e       nvidia-device-plugin-daemonset-gft7l                         kube-system
	3b156f686aa9b       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     41 seconds ago       Running             amd-gpu-device-plugin                    0                   a8afc61ab0fdb       amd-gpu-device-plugin-kcdzf                                  kube-system
	96f3ecfd93122       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   42 seconds ago       Exited              patch                                    0                   9b3eaa72e965c       gcp-auth-certs-patch-h7m8m                                   gcp-auth
	40e801de9fbe0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   42 seconds ago       Running             csi-external-health-monitor-controller   0                   9f61fead7e5f9       csi-hostpathplugin-8skb7                                     kube-system
	fc199ed96e024       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      43 seconds ago       Running             volume-snapshot-controller               0                   e35848eff9303       snapshot-controller-7d9fbc56b8-q26tv                         kube-system
	ca4ffed1fb95f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   43 seconds ago       Exited              create                                   0                   8acbc451cfc3a       ingress-nginx-admission-create-cjxlj                         ingress-nginx
	01ca798c81384       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      44 seconds ago       Running             volume-snapshot-controller               0                   5801501067e61       snapshot-controller-7d9fbc56b8-smbtq                         kube-system
	06b8b721d44f0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   45 seconds ago       Exited              create                                   0                   08e823b5759ac       gcp-auth-certs-create-7rm9w                                  gcp-auth
	1765f478745cd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              45 seconds ago       Running             csi-resizer                              0                   dd5990dbd8d58       csi-hostpath-resizer-0                                       kube-system
	3ab2ba7b9cdf5       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             46 seconds ago       Running             local-path-provisioner                   0                   ae84910c291a0       local-path-provisioner-648f6765c9-tznjx                      local-path-storage
	9e8da838b979e       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              47 seconds ago       Running             yakd                                     0                   5df602ae31fb1       yakd-dashboard-5ff678cb9-tx6dk                               yakd-dashboard
	651380a78efa5       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             50 seconds ago       Running             csi-attacher                             0                   a89ce5b4c9ee0       csi-hostpath-attacher-0                                      kube-system
	3a1961ad35159       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           50 seconds ago       Running             registry                                 0                   fbb5cc422ecf5       registry-6b586f9694-mq8bw                                    kube-system
	8fb1acf83526f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               52 seconds ago       Running             minikube-ingress-dns                     0                   2ec6a6b0d0f7e       kube-ingress-dns-minikube                                    kube-system
	c38e4541e0dba       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               57 seconds ago       Running             cloud-spanner-emulator                   0                   b29330a812a18       cloud-spanner-emulator-5bdddb765-sfxnv                       default
	dcae7b911caf9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        59 seconds ago       Running             metrics-server                           0                   9fa02050ac2df       metrics-server-85b7d694d7-87jkk                              kube-system
	3d968d545ec05       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   0270667f50a9e       coredns-66bc5c9577-bzmrl                                     kube-system
	8571ca641d958       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   973ebb9d087e5       storage-provisioner                                          kube-system
	792f2602e690a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   fe52c087cd5f6       kube-proxy-lrdk2                                             kube-system
	adf924f9387e3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   62a55cded7e42       kindnet-vqst5                                                kube-system
	6e081d40a1a88       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   4590791df805c       kube-apiserver-addons-959783                                 kube-system
	e05878f9bf96b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   3957dc6faa309       kube-controller-manager-addons-959783                        kube-system
	9e76f19262eee       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   06450e87ee6a2       kube-scheduler-addons-959783                                 kube-system
	f6524b0b95cff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   3d1b46a0c25e6       etcd-addons-959783                                           kube-system
	
	
	==> coredns [3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b] <==
	[INFO] 10.244.0.16:50366 - 8022 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000136162s
	[INFO] 10.244.0.16:40664 - 47138 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000074532s
	[INFO] 10.244.0.16:40664 - 47445 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000129758s
	[INFO] 10.244.0.16:53574 - 61099 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00005656s
	[INFO] 10.244.0.16:53574 - 60829 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000088091s
	[INFO] 10.244.0.16:40379 - 22445 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000057505s
	[INFO] 10.244.0.16:40379 - 22050 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000102625s
	[INFO] 10.244.0.16:33468 - 61235 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000131773s
	[INFO] 10.244.0.16:33468 - 61031 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00017241s
	[INFO] 10.244.0.22:59363 - 14845 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000162186s
	[INFO] 10.244.0.22:40976 - 17071 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000237556s
	[INFO] 10.244.0.22:44701 - 25692 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141097s
	[INFO] 10.244.0.22:34135 - 43124 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154621s
	[INFO] 10.244.0.22:37438 - 16843 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119133s
	[INFO] 10.244.0.22:35950 - 59335 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133151s
	[INFO] 10.244.0.22:46525 - 43799 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006194202s
	[INFO] 10.244.0.22:35559 - 64489 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.006330659s
	[INFO] 10.244.0.22:45875 - 11624 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006302281s
	[INFO] 10.244.0.22:40923 - 45890 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006529417s
	[INFO] 10.244.0.22:58136 - 10529 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004091138s
	[INFO] 10.244.0.22:39074 - 2465 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006063062s
	[INFO] 10.244.0.22:35645 - 24213 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001928691s
	[INFO] 10.244.0.22:36683 - 49159 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002400061s
	[INFO] 10.244.0.27:53472 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000243119s
	[INFO] 10.244.0.27:42339 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158377s
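
The burst of NXDOMAIN answers before each NOERROR is resolv.conf search-path expansion, not a failure: pods here resolve with ndots:5, so a name like storage.googleapis.com (two dots) is tried against every search domain first and only then as an absolute name. A pure-logic sketch of that ordering, with the search list copied from the queries above (no DNS traffic involved):

package main

import (
	"fmt"
	"strings"
)

// queryOrder returns candidate FQDNs in the order a resolver with the given
// ndots setting would try them, per resolv.conf semantics.
func queryOrder(name string, ndots int, search []string) []string {
	if strings.HasSuffix(name, ".") {
		return []string{name} // already absolute: exactly one query
	}
	var out []string
	absoluteFirst := strings.Count(name, ".") >= ndots
	if absoluteFirst {
		out = append(out, name+".")
	}
	for _, s := range search {
		out = append(out, name+"."+s+".")
	}
	if !absoluteFirst {
		out = append(out, name+".")
	}
	return out
}

func main() {
	search := []string{
		"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local",
		"us-central1-a.c.k8s-minikube.internal", "c.k8s-minikube.internal", "google.internal",
	}
	for _, q := range queryOrder("storage.googleapis.com", 5, search) {
		fmt.Println(q) // the first six hit NXDOMAIN in the log; the bare name answers NOERROR
	}
}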
	
	
	==> describe nodes <==
	Name:               addons-959783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-959783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=addons-959783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T07_56_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-959783
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-959783"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 07:56:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-959783
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 07:57:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 07:57:41 +0000   Sun, 23 Nov 2025 07:56:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 07:57:41 +0000   Sun, 23 Nov 2025 07:56:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 07:57:41 +0000   Sun, 23 Nov 2025 07:56:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 07:57:41 +0000   Sun, 23 Nov 2025 07:56:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-959783
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                7975106a-c6f5-487f-a4ee-660505127c74
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     cloud-spanner-emulator-5bdddb765-sfxnv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  gadget                      gadget-jsjqv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  gcp-auth                    gcp-auth-78565c9fb4-5cjfg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-k5rdb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         101s
	  kube-system                 amd-gpu-device-plugin-kcdzf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 coredns-66bc5c9577-bzmrl                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpathplugin-8skb7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 etcd-addons-959783                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-vqst5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-addons-959783                250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-addons-959783       200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-lrdk2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-addons-959783                100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 metrics-server-85b7d694d7-87jkk             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         101s
	  kube-system                 nvidia-device-plugin-daemonset-gft7l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 registry-6b586f9694-mq8bw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 registry-creds-764b6fb674-5nncl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 registry-proxy-txmj8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 snapshot-controller-7d9fbc56b8-q26tv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 snapshot-controller-7d9fbc56b8-smbtq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  local-path-storage          local-path-provisioner-648f6765c9-tznjx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-tx6dk              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node addons-959783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node addons-959783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)  kubelet          Node addons-959783 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node addons-959783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node addons-959783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node addons-959783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           104s                 node-controller  Node addons-959783 event: Registered Node addons-959783 in Controller
	  Normal  NodeReady                62s                  kubelet          Node addons-959783 status is now: NodeReady
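
In the "Allocated resources" block above, the percentages are summed scheduler requests divided by node allocatable, so 1050m of CPU requests against this 8-CPU node reports 13%. A small sketch of that arithmetic using the same Quantity type the scheduler uses, with the values copied from the table:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	requests := resource.MustParse("1050m") // summed CPU requests from the pod table
	allocatable := resource.MustParse("8")  // node allocatable CPU

	// MilliValue puts both on the same scale: 1050 / 8000 ≈ 13%.
	pct := float64(requests.MilliValue()) / float64(allocatable.MilliValue()) * 100
	fmt.Printf("cpu %s of %s = %.0f%%\n", requests.String(), allocatable.String(), pct)
}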
	
	
	==> dmesg <==
	[Nov23 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.366223] i8042: Warning: Keylock active
	[  +0.011161] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.483510] block sda: the capability attribute has been deprecated.
	[  +0.079858] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024030] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.151122] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea] <==
	{"level":"warn","ts":"2025-11-23T07:56:07.282195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.287581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.296434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.305204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.310605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.317793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.323489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.329914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.335580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.342967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.348835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.355089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.362075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.369216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.376269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.400809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.406371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.411982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:07.459248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:18.133526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:44.844784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:44.850457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T07:56:44.874148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56060","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T07:57:06.483825Z","caller":"traceutil/trace.go:172","msg":"trace[1385742835] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"120.000658ms","start":"2025-11-23T07:57:06.363806Z","end":"2025-11-23T07:57:06.483807Z","steps":["trace[1385742835] 'process raft request'  (duration: 119.841063ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T07:57:30.644401Z","caller":"traceutil/trace.go:172","msg":"trace[1414903627] transaction","detail":"{read_only:false; response_revision:1206; number_of_response:1; }","duration":"136.191247ms","start":"2025-11-23T07:57:30.508195Z","end":"2025-11-23T07:57:30.644386Z","steps":["trace[1414903627] 'process raft request'  (duration: 136.099241ms)"],"step_count":1}
	
	
	==> gcp-auth [67e62a3782dbdac8d36f038c0536bbe3746fe321e6bd7ea94fa66be1e8722d40] <==
	2025/11/23 07:57:33 GCP Auth Webhook started!
	2025/11/23 07:57:36 Ready to marshal response ...
	2025/11/23 07:57:36 Ready to write response ...
	2025/11/23 07:57:36 Ready to marshal response ...
	2025/11/23 07:57:36 Ready to write response ...
	2025/11/23 07:57:36 Ready to marshal response ...
	2025/11/23 07:57:36 Ready to write response ...
	2025/11/23 07:57:44 Ready to marshal response ...
	2025/11/23 07:57:44 Ready to write response ...
	2025/11/23 07:57:44 Ready to marshal response ...
	2025/11/23 07:57:44 Ready to write response ...
	2025/11/23 07:57:54 Ready to marshal response ...
	2025/11/23 07:57:54 Ready to write response ...
	2025/11/23 07:57:55 Ready to marshal response ...
	2025/11/23 07:57:55 Ready to write response ...
	
	
	==> kernel <==
	 07:57:58 up 40 min,  0 user,  load average: 2.02, 0.85, 0.32
	Linux addons-959783 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed] <==
	I1123 07:56:16.420670       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 07:56:16.420716       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 07:56:16.420729       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 07:56:16.422536       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 07:56:46.420914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 07:56:46.421849       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 07:56:46.421850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 07:56:46.423026       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 07:56:47.720968       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 07:56:47.720996       1 metrics.go:72] Registering metrics
	I1123 07:56:47.721057       1 controller.go:711] "Syncing nftables rules"
	I1123 07:56:56.425756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:56:56.425792       1 main.go:301] handling current node
	I1123 07:57:06.420872       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:57:06.420914       1 main.go:301] handling current node
	I1123 07:57:16.420209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:57:16.420251       1 main.go:301] handling current node
	I1123 07:57:26.420946       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:57:26.420984       1 main.go:301] handling current node
	I1123 07:57:36.420774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:57:36.420801       1 main.go:301] handling current node
	I1123 07:57:46.420827       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:57:46.420862       1 main.go:301] handling current node
	I1123 07:57:56.420485       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 07:57:56.420520       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527] <==
	W1123 07:57:09.354149       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 07:57:09.354227       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 07:57:09.354744       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.157.143:443: connect: connection refused" logger="UnhandledError"
	E1123 07:57:09.356266       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.157.143:443: connect: connection refused" logger="UnhandledError"
	W1123 07:57:10.354926       1 handler_proxy.go:99] no RequestInfo found in the context
	W1123 07:57:10.354926       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 07:57:10.355002       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1123 07:57:10.355016       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1123 07:57:10.355015       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1123 07:57:10.356125       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1123 07:57:14.365384       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 07:57:14.365431       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 07:57:14.365498       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.157.143:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1123 07:57:14.372830       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1123 07:57:43.921263       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34954: use of closed network connection
	E1123 07:57:44.057487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34974: use of closed network connection
	
	
	==> kube-controller-manager [e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779] <==
	I1123 07:56:14.831855       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 07:56:14.831837       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 07:56:14.831894       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 07:56:14.832044       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 07:56:14.832045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 07:56:14.832046       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 07:56:14.832131       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 07:56:14.832334       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 07:56:14.832334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 07:56:14.832449       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 07:56:14.833218       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 07:56:14.833227       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 07:56:14.833329       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 07:56:14.833678       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 07:56:14.835881       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 07:56:14.836016       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 07:56:14.850273       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1123 07:56:44.839841       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1123 07:56:44.839954       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 07:56:44.840003       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 07:56:44.859009       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1123 07:56:44.862152       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 07:56:44.940697       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 07:56:44.962876       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 07:56:59.787014       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687] <==
	I1123 07:56:15.965506       1 server_linux.go:53] "Using iptables proxy"
	I1123 07:56:16.163839       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 07:56:16.265050       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 07:56:16.268811       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 07:56:16.272027       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 07:56:16.521615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 07:56:16.523741       1 server_linux.go:132] "Using iptables Proxier"
	I1123 07:56:16.698348       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 07:56:16.716566       1 server.go:527] "Version info" version="v1.34.1"
	I1123 07:56:16.716908       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 07:56:16.718799       1 config.go:200] "Starting service config controller"
	I1123 07:56:16.718811       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 07:56:16.719216       1 config.go:106] "Starting endpoint slice config controller"
	I1123 07:56:16.719228       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 07:56:16.719244       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 07:56:16.719249       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 07:56:16.719469       1 config.go:309] "Starting node config controller"
	I1123 07:56:16.719477       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 07:56:16.719484       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 07:56:16.819564       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 07:56:16.824755       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 07:56:16.824808       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8] <==
	E1123 07:56:07.842375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 07:56:07.842421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 07:56:07.842487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 07:56:07.842513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 07:56:07.842562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 07:56:07.842552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 07:56:07.842633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 07:56:07.842673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 07:56:07.842704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 07:56:07.842721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 07:56:07.842735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 07:56:07.842774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 07:56:07.842796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 07:56:07.842803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 07:56:07.842852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 07:56:07.842886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 07:56:08.715648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 07:56:08.727450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 07:56:08.767422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 07:56:08.813247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 07:56:08.815064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 07:56:08.881431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 07:56:08.908345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 07:56:08.978788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 07:56:10.839632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 07:57:54 addons-959783 kubelet[1301]: I1123 07:57:54.119359    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5e2bd4bf-ed32-453b-9529-69c50b814ed5-data\") pod \"helper-pod-delete-pvc-eb41d53f-743e-4287-8190-205dfc85238e\" (UID: \"5e2bd4bf-ed32-453b-9529-69c50b814ed5\") " pod="local-path-storage/helper-pod-delete-pvc-eb41d53f-743e-4287-8190-205dfc85238e"
	Nov 23 07:57:54 addons-959783 kubelet[1301]: I1123 07:57:54.212088    1301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f97443-2bda-4594-a803-b55f6ca5c283" path="/var/lib/kubelet/pods/17f97443-2bda-4594-a803-b55f6ca5c283/volumes"
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.528297    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9713a485-d528-4f08-9f66-96c3e5b2f714-gcp-creds\") pod \"registry-test\" (UID: \"9713a485-d528-4f08-9f66-96c3e5b2f714\") " pod="default/registry-test"
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.528356    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kslp6\" (UniqueName: \"kubernetes.io/projected/9713a485-d528-4f08-9f66-96c3e5b2f714-kube-api-access-kslp6\") pod \"registry-test\" (UID: \"9713a485-d528-4f08-9f66-96c3e5b2f714\") " pod="default/registry-test"
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.730286    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5e2bd4bf-ed32-453b-9529-69c50b814ed5-gcp-creds\") pod \"5e2bd4bf-ed32-453b-9529-69c50b814ed5\" (UID: \"5e2bd4bf-ed32-453b-9529-69c50b814ed5\") "
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.730333    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5e2bd4bf-ed32-453b-9529-69c50b814ed5-data\") pod \"5e2bd4bf-ed32-453b-9529-69c50b814ed5\" (UID: \"5e2bd4bf-ed32-453b-9529-69c50b814ed5\") "
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.730371    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5e2bd4bf-ed32-453b-9529-69c50b814ed5-script\") pod \"5e2bd4bf-ed32-453b-9529-69c50b814ed5\" (UID: \"5e2bd4bf-ed32-453b-9529-69c50b814ed5\") "
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.730395    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcl4p\" (UniqueName: \"kubernetes.io/projected/5e2bd4bf-ed32-453b-9529-69c50b814ed5-kube-api-access-fcl4p\") pod \"5e2bd4bf-ed32-453b-9529-69c50b814ed5\" (UID: \"5e2bd4bf-ed32-453b-9529-69c50b814ed5\") "
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.730393    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2bd4bf-ed32-453b-9529-69c50b814ed5-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5e2bd4bf-ed32-453b-9529-69c50b814ed5" (UID: "5e2bd4bf-ed32-453b-9529-69c50b814ed5"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.730413    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e2bd4bf-ed32-453b-9529-69c50b814ed5-data" (OuterVolumeSpecName: "data") pod "5e2bd4bf-ed32-453b-9529-69c50b814ed5" (UID: "5e2bd4bf-ed32-453b-9529-69c50b814ed5"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.730588    1301 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5e2bd4bf-ed32-453b-9529-69c50b814ed5-gcp-creds\") on node \"addons-959783\" DevicePath \"\""
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.730618    1301 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5e2bd4bf-ed32-453b-9529-69c50b814ed5-data\") on node \"addons-959783\" DevicePath \"\""
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.730785    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2bd4bf-ed32-453b-9529-69c50b814ed5-script" (OuterVolumeSpecName: "script") pod "5e2bd4bf-ed32-453b-9529-69c50b814ed5" (UID: "5e2bd4bf-ed32-453b-9529-69c50b814ed5"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.732652    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2bd4bf-ed32-453b-9529-69c50b814ed5-kube-api-access-fcl4p" (OuterVolumeSpecName: "kube-api-access-fcl4p") pod "5e2bd4bf-ed32-453b-9529-69c50b814ed5" (UID: "5e2bd4bf-ed32-453b-9529-69c50b814ed5"). InnerVolumeSpecName "kube-api-access-fcl4p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.831037    1301 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5e2bd4bf-ed32-453b-9529-69c50b814ed5-script\") on node \"addons-959783\" DevicePath \"\""
	Nov 23 07:57:55 addons-959783 kubelet[1301]: I1123 07:57:55.831067    1301 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fcl4p\" (UniqueName: \"kubernetes.io/projected/5e2bd4bf-ed32-453b-9529-69c50b814ed5-kube-api-access-fcl4p\") on node \"addons-959783\" DevicePath \"\""
	Nov 23 07:57:56 addons-959783 kubelet[1301]: I1123 07:57:56.212196    1301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2bd4bf-ed32-453b-9529-69c50b814ed5" path="/var/lib/kubelet/pods/5e2bd4bf-ed32-453b-9529-69c50b814ed5/volumes"
	Nov 23 07:57:56 addons-959783 kubelet[1301]: I1123 07:57:56.619196    1301 scope.go:117] "RemoveContainer" containerID="96582ed9e5732a084eb6a0d842558b8d91e8e6a3766e74a988cad3c1fadcd213"
	Nov 23 07:57:57 addons-959783 kubelet[1301]: I1123 07:57:57.744477    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kslp6\" (UniqueName: \"kubernetes.io/projected/9713a485-d528-4f08-9f66-96c3e5b2f714-kube-api-access-kslp6\") pod \"9713a485-d528-4f08-9f66-96c3e5b2f714\" (UID: \"9713a485-d528-4f08-9f66-96c3e5b2f714\") "
	Nov 23 07:57:57 addons-959783 kubelet[1301]: I1123 07:57:57.744538    1301 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9713a485-d528-4f08-9f66-96c3e5b2f714-gcp-creds\") pod \"9713a485-d528-4f08-9f66-96c3e5b2f714\" (UID: \"9713a485-d528-4f08-9f66-96c3e5b2f714\") "
	Nov 23 07:57:57 addons-959783 kubelet[1301]: I1123 07:57:57.744753    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9713a485-d528-4f08-9f66-96c3e5b2f714-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9713a485-d528-4f08-9f66-96c3e5b2f714" (UID: "9713a485-d528-4f08-9f66-96c3e5b2f714"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 07:57:57 addons-959783 kubelet[1301]: I1123 07:57:57.746615    1301 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9713a485-d528-4f08-9f66-96c3e5b2f714-kube-api-access-kslp6" (OuterVolumeSpecName: "kube-api-access-kslp6") pod "9713a485-d528-4f08-9f66-96c3e5b2f714" (UID: "9713a485-d528-4f08-9f66-96c3e5b2f714"). InnerVolumeSpecName "kube-api-access-kslp6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 07:57:57 addons-959783 kubelet[1301]: I1123 07:57:57.845960    1301 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kslp6\" (UniqueName: \"kubernetes.io/projected/9713a485-d528-4f08-9f66-96c3e5b2f714-kube-api-access-kslp6\") on node \"addons-959783\" DevicePath \"\""
	Nov 23 07:57:57 addons-959783 kubelet[1301]: I1123 07:57:57.846002    1301 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9713a485-d528-4f08-9f66-96c3e5b2f714-gcp-creds\") on node \"addons-959783\" DevicePath \"\""
	Nov 23 07:57:58 addons-959783 kubelet[1301]: I1123 07:57:58.213155    1301 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9713a485-d528-4f08-9f66-96c3e5b2f714" path="/var/lib/kubelet/pods/9713a485-d528-4f08-9f66-96c3e5b2f714/volumes"
	
	
	==> storage-provisioner [8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8] <==
	W1123 07:57:33.488031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:35.490037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:35.494099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:37.497370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:37.500839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:39.503213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:39.506603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:41.509023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:41.512534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:43.515719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:43.519012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:45.522150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:45.526948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:47.529766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:47.533566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:49.536297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:49.542971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:51.545737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:51.549132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:53.551743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:53.555898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:55.558658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:55.562743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:57.566128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 07:57:57.570020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
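The storage-provisioner block above is dominated by one repeated client-go warning: the provisioner still reads core/v1 Endpoints (most likely for its leader-election lock), which Kubernetes deprecates from v1.33 in favor of discovery.k8s.io/v1 EndpointSlice. As a sketch only, assuming the report's kubectl context is still live, the replacement resources can be inspected like this:

	# EndpointSlices are the successor the warning points at:
	kubectl --context addons-959783 get endpointslices.discovery.k8s.io -n kube-system
	# modern leader election uses coordination.k8s.io Leases rather than Endpoints:
	kubectl --context addons-959783 get leases -n kube-system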
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-959783 -n addons-959783
helpers_test.go:269: (dbg) Run:  kubectl --context addons-959783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-cjxlj ingress-nginx-admission-patch-zf9fd registry-creds-764b6fb674-5nncl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-959783 describe pod ingress-nginx-admission-create-cjxlj ingress-nginx-admission-patch-zf9fd registry-creds-764b6fb674-5nncl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-959783 describe pod ingress-nginx-admission-create-cjxlj ingress-nginx-admission-patch-zf9fd registry-creds-764b6fb674-5nncl: exit status 1 (56.415918ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cjxlj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zf9fd" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-5nncl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-959783 describe pod ingress-nginx-admission-create-cjxlj ingress-nginx-admission-patch-zf9fd registry-creds-764b6fb674-5nncl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable headlamp --alsologtostderr -v=1: exit status 11 (243.102062ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:57:59.089817   25928 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:59.089963   25928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:59.089972   25928 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:59.089976   25928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:59.090158   25928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:59.090545   25928 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:59.090866   25928 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:59.090882   25928 addons.go:622] checking whether the cluster is paused
	I1123 07:57:59.090968   25928 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:59.090980   25928 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:59.091317   25928 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:59.110885   25928 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:59.110950   25928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:59.128050   25928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:59.227751   25928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:59.227817   25928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:59.256984   25928 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:59.257009   25928 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:59.257015   25928 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:59.257020   25928 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:59.257025   25928 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:59.257030   25928 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:59.257034   25928 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:59.257038   25928 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:59.257042   25928 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:59.257058   25928 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:59.257065   25928 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:59.257069   25928 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:59.257074   25928 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:59.257081   25928 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:59.257086   25928 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:59.257099   25928 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:59.257107   25928 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:59.257113   25928 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:59.257118   25928 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:59.257123   25928 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:59.257133   25928 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:59.257140   25928 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:59.257144   25928 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:59.257147   25928 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:59.257150   25928 cri.go:89] found id: ""
	I1123 07:57:59.257196   25928 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:59.270951   25928 out.go:203] 
	W1123 07:57:59.271974   25928 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:59.271995   25928 out.go:285] * 
	* 
	W1123 07:57:59.274843   25928 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:59.275845   25928 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.47s)
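Every addons-disable failure in this report exits through the same path: after listing kube-system containers with crictl (the cri.go:89 lines above), minikube's paused-state check shells out to `sudo runc list -f json`, which assumes runc keeps its state under /run/runc. On this crio-backed node that directory evidently does not exist, so the check fails with MK_ADDON_DISABLE_PAUSED before the addon is ever touched. A minimal reproduction sketch, assuming the addons-959783 node from this run is still reachable:

	# the probe minikube performs, run by hand -- fails the same way on this node:
	out/minikube-linux-amd64 -p addons-959783 ssh -- sudo runc list -f json
	# => level=error msg="open /run/runc: no such file or directory"

	# the CRI-level listing from the same code path does succeed, matching the cri.go output above:
	out/minikube-linux-amd64 -p addons-959783 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system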

TestAddons/parallel/CloudSpanner (6.27s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-sfxnv" [3981f3c4-40cd-4e41-a77e-4b7a2fe56aef] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00260448s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (257.824414ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 07:58:02.875408   26827 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:58:02.875890   26827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:58:02.875905   26827 out.go:374] Setting ErrFile to fd 2...
	I1123 07:58:02.875912   26827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:58:02.876463   26827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:58:02.876893   26827 mustload.go:66] Loading cluster: addons-959783
	I1123 07:58:02.878023   26827 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:58:02.878051   26827 addons.go:622] checking whether the cluster is paused
	I1123 07:58:02.878189   26827 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:58:02.878212   26827 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:58:02.878750   26827 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:58:02.898568   26827 ssh_runner.go:195] Run: systemctl --version
	I1123 07:58:02.898632   26827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:58:02.916901   26827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:58:03.017659   26827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:58:03.017741   26827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:58:03.048221   26827 cri.go:89] found id: "ad1dfd5356782ae1a3eab35c55a8babfe8788ac17891691075fe655d8b74199b"
	I1123 07:58:03.048260   26827 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:58:03.048281   26827 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:58:03.048287   26827 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:58:03.048291   26827 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:58:03.048296   26827 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:58:03.048301   26827 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:58:03.048305   26827 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:58:03.048311   26827 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:58:03.048320   26827 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:58:03.048329   26827 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:58:03.048333   26827 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:58:03.048338   26827 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:58:03.048366   26827 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:58:03.048375   26827 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:58:03.048393   26827 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:58:03.048397   26827 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:58:03.048404   26827 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:58:03.048408   26827 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:58:03.048413   26827 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:58:03.048420   26827 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:58:03.048424   26827 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:58:03.048429   26827 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:58:03.048499   26827 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:58:03.048525   26827 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:58:03.048541   26827 cri.go:89] found id: ""
	I1123 07:58:03.048618   26827 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:58:03.064492   26827 out.go:203] 
	W1123 07:58:03.065659   26827 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:58:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:58:03.065679   26827 out.go:285] * 
	* 
	W1123 07:58:03.068629   26827 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:58:03.069764   26827 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.27s)

TestAddons/parallel/LocalPath (10.06s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-959783 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-959783 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-959783 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [17f97443-2bda-4594-a803-b55f6ca5c283] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [17f97443-2bda-4594-a803-b55f6ca5c283] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [17f97443-2bda-4594-a803-b55f6ca5c283] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.001933855s
addons_test.go:967: (dbg) Run:  kubectl --context addons-959783 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 ssh "cat /opt/local-path-provisioner/pvc-eb41d53f-743e-4287-8190-205dfc85238e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-959783 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-959783 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (236.9788ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 07:57:54.169872   24691 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:54.170030   24691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:54.170040   24691 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:54.170046   24691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:54.170325   24691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:54.170600   24691 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:54.170917   24691 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:54.170938   24691 addons.go:622] checking whether the cluster is paused
	I1123 07:57:54.171033   24691 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:54.171045   24691 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:54.171378   24691 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:54.188629   24691 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:54.188667   24691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:54.205344   24691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:54.303529   24691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:54.303594   24691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:54.331266   24691 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:54.331290   24691 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:54.331294   24691 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:54.331297   24691 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:54.331300   24691 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:54.331304   24691 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:54.331307   24691 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:54.331310   24691 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:54.331312   24691 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:54.331329   24691 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:54.331338   24691 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:54.331342   24691 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:54.331347   24691 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:54.331354   24691 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:54.331357   24691 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:54.331368   24691 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:54.331374   24691 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:54.331378   24691 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:54.331380   24691 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:54.331383   24691 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:54.331387   24691 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:54.331390   24691 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:54.331393   24691 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:54.331395   24691 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:54.331398   24691 cri.go:89] found id: ""
	I1123 07:57:54.331447   24691 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:54.345139   24691 out.go:203] 
	W1123 07:57:54.346666   24691 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:54.346682   24691 out.go:285] * 
	W1123 07:57:54.349647   24691 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:54.351225   24691 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.06s)
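Note that the local-path flow itself completed before the failure: the PVC bound, the busybox pod wrote file1, and the test read it back over ssh; only the trailing disable call hit the paused-check described above. A rough hand-check of the same provisioning path (resource names mirror the test's; the testdata manifests are not reproduced here):

	kubectl --context addons-959783 get pvc test-pvc -o jsonpath='{.status.phase}'
	kubectl --context addons-959783 get pod test-local-path -o jsonpath='{.status.phase}'
	minikube -p addons-959783 ssh -- ls /opt/local-path-provisioner/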

TestAddons/parallel/NvidiaDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-gft7l" [81c12107-652e-454a-9b52-5b44ffb4e5f9] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00306908s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (257.084609ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 07:57:50.357646   24343 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:50.357937   24343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:50.357948   24343 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:50.357952   24343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:50.358170   24343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:50.358493   24343 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:50.358960   24343 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:50.358987   24343 addons.go:622] checking whether the cluster is paused
	I1123 07:57:50.359106   24343 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:50.359117   24343 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:50.359702   24343 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:50.378214   24343 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:50.378256   24343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:50.395822   24343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:50.496354   24343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:50.496447   24343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:50.530918   24343 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:50.530940   24343 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:50.530947   24343 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:50.530952   24343 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:50.530957   24343 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:50.530962   24343 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:50.530966   24343 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:50.530971   24343 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:50.530975   24343 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:50.530982   24343 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:50.530992   24343 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:50.530997   24343 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:50.531002   24343 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:50.531010   24343 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:50.531015   24343 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:50.531028   24343 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:50.531044   24343 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:50.531050   24343 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:50.531054   24343 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:50.531061   24343 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:50.531069   24343 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:50.531073   24343 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:50.531076   24343 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:50.531079   24343 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:50.531081   24343 cri.go:89] found id: ""
	I1123 07:57:50.531114   24343 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:50.544405   24343 out.go:203] 
	W1123 07:57:50.545352   24343 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:50.545372   24343 out.go:285] * 
	W1123 07:57:50.550297   24343 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:50.551556   24343 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.26s)
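The six-minute pod wait performed by helpers_test.go above maps roughly onto a single kubectl invocation; an approximation, not the test's actual code:

	kubectl --context addons-959783 -n kube-system wait pod -l name=nvidia-device-plugin-ds --for=condition=Ready --timeout=6m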

TestAddons/parallel/Yakd (6.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-tx6dk" [ca4f8723-a471-458e-be09-4525c960973a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003604454s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable yakd --alsologtostderr -v=1: exit status 11 (247.497754ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 07:57:56.613636   24897 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:56.613784   24897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:56.613796   24897 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:56.613803   24897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:56.614056   24897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:56.614355   24897 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:56.615392   24897 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:56.615428   24897 addons.go:622] checking whether the cluster is paused
	I1123 07:57:56.615599   24897 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:56.615613   24897 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:56.616538   24897 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:56.639231   24897 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:56.639280   24897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:56.656584   24897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:56.754639   24897 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:56.754735   24897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:56.785077   24897 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:56.785098   24897 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:56.785105   24897 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:56.785110   24897 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:56.785115   24897 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:56.785120   24897 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:56.785125   24897 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:56.785135   24897 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:56.785140   24897 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:56.785147   24897 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:56.785155   24897 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:56.785160   24897 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:56.785164   24897 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:56.785169   24897 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:56.785173   24897 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:56.785185   24897 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:56.785190   24897 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:56.785197   24897 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:56.785201   24897 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:56.785209   24897 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:56.785220   24897 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:56.785235   24897 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:56.785240   24897 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:56.785252   24897 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:56.785257   24897 cri.go:89] found id: ""
	I1123 07:57:56.785297   24897 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:56.798791   24897 out.go:203] 
	W1123 07:57:56.799714   24897 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:56.799733   24897 out.go:285] * 
	W1123 07:57:56.802585   24897 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:56.803528   24897 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.25s)

TestAddons/parallel/AmdGpuDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-kcdzf" [e3f0739c-033b-404d-8651-715b88a2e213] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003010815s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-959783 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959783 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (256.04313ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 07:57:50.357672   24342 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:57:50.357826   24342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:50.357833   24342 out.go:374] Setting ErrFile to fd 2...
	I1123 07:57:50.357837   24342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:57:50.358200   24342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:57:50.358596   24342 mustload.go:66] Loading cluster: addons-959783
	I1123 07:57:50.359556   24342 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:50.359582   24342 addons.go:622] checking whether the cluster is paused
	I1123 07:57:50.359764   24342 config.go:182] Loaded profile config "addons-959783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 07:57:50.359784   24342 host.go:66] Checking if "addons-959783" exists ...
	I1123 07:57:50.360255   24342 cli_runner.go:164] Run: docker container inspect addons-959783 --format={{.State.Status}}
	I1123 07:57:50.377779   24342 ssh_runner.go:195] Run: systemctl --version
	I1123 07:57:50.377837   24342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-959783
	I1123 07:57:50.394097   24342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/addons-959783/id_rsa Username:docker}
	I1123 07:57:50.493376   24342 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 07:57:50.493463   24342 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 07:57:50.528775   24342 cri.go:89] found id: "444c2f1efdf59675c8828e507928b6363aca984ebc848e8099519b32a9f956a6"
	I1123 07:57:50.528795   24342 cri.go:89] found id: "87f89d7ecfbd2e08fd8373cb9376596384aa2730856715fc8a026da87deecbd7"
	I1123 07:57:50.528801   24342 cri.go:89] found id: "85507fd98859169de16a8085f86e447d8c4f9d2258a7dfd191e1558a15970a4c"
	I1123 07:57:50.528806   24342 cri.go:89] found id: "de6c01c726f845b43d15eb3ebb6b609548b0de558954c14115bfdec19c1e12d6"
	I1123 07:57:50.528810   24342 cri.go:89] found id: "d835cb4f7791dec6b9ef0598923132288ef7700038af5078037a8177d2110361"
	I1123 07:57:50.528815   24342 cri.go:89] found id: "8cc8cf367fdab3959823482801766117607d028a1d0722c8c1347f6cb1bb7394"
	I1123 07:57:50.528819   24342 cri.go:89] found id: "ef34710dda6e2b6dae1ca823c04527a65cb56771df96b1661e376cd5ec670d1d"
	I1123 07:57:50.528823   24342 cri.go:89] found id: "3b156f686aa9bd714b886ec2756d95f8901c3d76fffe66531d02b282b8760765"
	I1123 07:57:50.528827   24342 cri.go:89] found id: "40e801de9fbe0ef8409b29165f4350fbb8838da48d764ddbf3ca55f743b4a4d3"
	I1123 07:57:50.528835   24342 cri.go:89] found id: "fc199ed96e0248d636bba2e0ee9bcc720381c4c23a179471b1503ccbb50c8076"
	I1123 07:57:50.528839   24342 cri.go:89] found id: "01ca798c81384f6be50eb9db03df8330793444e9ae7d31e47360a3c3a6b9bb0c"
	I1123 07:57:50.528844   24342 cri.go:89] found id: "1765f478745cdaebd1061823a021d738ced3ed8facba8e0e4a29aa2d8e1834b8"
	I1123 07:57:50.528850   24342 cri.go:89] found id: "651380a78efa567fff6693d0684f042e3814b1df3375655376fcd4acee977498"
	I1123 07:57:50.528860   24342 cri.go:89] found id: "3a1961ad35159a04cefd56ffa2882062661222bc8be5bf6f5cb7404b6d57bb53"
	I1123 07:57:50.528865   24342 cri.go:89] found id: "8fb1acf83526f62e1eac53ae8171fec5afa7054a3d22c6f6fa4d7dbc94d052ae"
	I1123 07:57:50.528879   24342 cri.go:89] found id: "dcae7b911caf99177970f4c0a4b67f9b6195062d70aeb792f4015dd4216bccfa"
	I1123 07:57:50.528884   24342 cri.go:89] found id: "3d968d545ec05f03918be23c98eae9ba86e4738704c7f106352f2c976eff590b"
	I1123 07:57:50.528890   24342 cri.go:89] found id: "8571ca641d95800b64ad891df3714946a1c89f3c454d409307e8cb1dab367cd8"
	I1123 07:57:50.528895   24342 cri.go:89] found id: "792f2602e690ae2e36b220c4c744e3673ae62ff907d1cd829f37d5d13ac5d687"
	I1123 07:57:50.528900   24342 cri.go:89] found id: "adf924f9387e3751f6d967ae80e24a7a33cbbde363739eeba07da085915b11ed"
	I1123 07:57:50.528907   24342 cri.go:89] found id: "6e081d40a1a881673018d949b5cb676d51b0e636b4fcdba26f1aef9b6f374527"
	I1123 07:57:50.528910   24342 cri.go:89] found id: "e05878f9bf96be8857dda14e67fa15d9ac60cb7e1512ee80285fd6e622d54779"
	I1123 07:57:50.528914   24342 cri.go:89] found id: "9e76f19262eee0a6583afdd49b48d12b68237527f9f83f936f004cc81b9ecad8"
	I1123 07:57:50.528918   24342 cri.go:89] found id: "f6524b0b95cffedd231fa5fbf834883a6ffda4deda75162fa19a047ceb3e8eea"
	I1123 07:57:50.528922   24342 cri.go:89] found id: ""
	I1123 07:57:50.528974   24342 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 07:57:50.543379   24342 out.go:203] 
	W1123 07:57:50.544452   24342 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T07:57:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 07:57:50.544475   24342 out.go:285] * 
	W1123 07:57:50.549587   24342 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 07:57:50.550860   24342 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-959783 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.26s)

TestFunctional/parallel/ServiceCmdConnect (602.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-762247 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-762247 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-lm72j" [ed72bdd6-61e2-4de2-8449-6633d8405528] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-762247 -n functional-762247
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-23 08:14:51.709009051 +0000 UTC m=+1168.208459770
functional_test.go:1645: (dbg) Run:  kubectl --context functional-762247 describe po hello-node-connect-7d85dfc575-lm72j -n default
functional_test.go:1645: (dbg) kubectl --context functional-762247 describe po hello-node-connect-7d85dfc575-lm72j -n default:
Name:             hello-node-connect-7d85dfc575-lm72j
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-762247/192.168.49.2
Start Time:       Sun, 23 Nov 2025 08:04:51 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cb5mf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-cb5mf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-lm72j to functional-762247
  Normal   Pulling    7m4s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m4s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m4s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m54s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m54s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-762247 logs hello-node-connect-7d85dfc575-lm72j -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-762247 logs hello-node-connect-7d85dfc575-lm72j -n default: exit status 1 (64.139052ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-lm72j" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1645: kubectl --context functional-762247 logs hello-node-connect-7d85dfc575-lm72j -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
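The kubelet events pin the root cause: the node's registries configuration enforces short-name mode, so the unqualified reference kicbase/echo-server is rejected because it resolves ambiguously across the unqualified-search registries. Two illustrative workarounds, assuming the intended copy of the image lives on docker.io (the registry choice and the drop-in file name below are assumptions, not taken from this run):

	# 1) Deploy with a fully qualified image reference instead of the short name:
	kubectl --context functional-762247 create deployment hello-node-connect --image docker.io/kicbase/echo-server:latest
	# 2) Or pin a short-name alias on the node, e.g. in
	#    /etc/containers/registries.conf.d/99-echo-server.conf:
	#      [aliases]
	#      "kicbase/echo-server" = "docker.io/kicbase/echo-server"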
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-762247 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-lm72j
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-762247/192.168.49.2
Start Time:       Sun, 23 Nov 2025 08:04:51 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cb5mf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-cb5mf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-lm72j to functional-762247
  Normal   Pulling    7m4s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m4s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m4s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m54s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m54s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1618: (dbg) Run:  kubectl --context functional-762247 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-762247 logs -l app=hello-node-connect: exit status 1 (57.000612ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-lm72j" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1620: "kubectl --context functional-762247 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-762247 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.14.28
IPs:                      10.100.14.28
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32048/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
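The empty Endpoints: field above is consistent with the image-pull failure rather than a service misconfiguration: no pod behind the selector ever became Ready, so the NodePort has nothing to route to. A quick confirmation (expected output sketched in the comments):

	kubectl --context functional-762247 get endpoints hello-node-connect
	# NAME                 ENDPOINTS   AGE
	# hello-node-connect   <none>      10m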
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-762247
helpers_test.go:243: (dbg) docker inspect functional-762247:
-- stdout --
	[
	    {
	        "Id": "43f338eff7e75cb1ec7af93e195bb8bc5ef5d3a836e1112af78ab50cbfa66029",
	        "Created": "2025-11-23T08:01:40.086517664Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 38182,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:01:40.114612931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/43f338eff7e75cb1ec7af93e195bb8bc5ef5d3a836e1112af78ab50cbfa66029/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43f338eff7e75cb1ec7af93e195bb8bc5ef5d3a836e1112af78ab50cbfa66029/hostname",
	        "HostsPath": "/var/lib/docker/containers/43f338eff7e75cb1ec7af93e195bb8bc5ef5d3a836e1112af78ab50cbfa66029/hosts",
	        "LogPath": "/var/lib/docker/containers/43f338eff7e75cb1ec7af93e195bb8bc5ef5d3a836e1112af78ab50cbfa66029/43f338eff7e75cb1ec7af93e195bb8bc5ef5d3a836e1112af78ab50cbfa66029-json.log",
	        "Name": "/functional-762247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-762247:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-762247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43f338eff7e75cb1ec7af93e195bb8bc5ef5d3a836e1112af78ab50cbfa66029",
	                "LowerDir": "/var/lib/docker/overlay2/3ed12af6c5138521a1c84d335f9b7f4c494b339390f6ef12d329bc9039e8323b-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ed12af6c5138521a1c84d335f9b7f4c494b339390f6ef12d329bc9039e8323b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ed12af6c5138521a1c84d335f9b7f4c494b339390f6ef12d329bc9039e8323b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ed12af6c5138521a1c84d335f9b7f4c494b339390f6ef12d329bc9039e8323b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-762247",
	                "Source": "/var/lib/docker/volumes/functional-762247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-762247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-762247",
	                "name.minikube.sigs.k8s.io": "functional-762247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "69055a0fc4a6901cac2e2fd296572ca0950c79253ed5a460396f303c82888e66",
	            "SandboxKey": "/var/run/docker/netns/69055a0fc4a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-762247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4bb669cccdf0694a13fcb4910d664a965d1cc9245489e84ba0e1de82a7ceef3d",
	                    "EndpointID": "57a0e8d1e3ee3dcbaf25eac95939162efe06bd3fc7333bc989ac73498783f5c3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "16:3f:75:5b:8c:e9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-762247",
	                        "43f338eff7e7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
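
The dump above is ordinary docker inspect JSON, so individual fields can be pulled out with a Go template instead of reading the whole document. A minimal sketch, using the container name functional-762247 and the 8441/tcp apiserver port from this run:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-762247
    # prints 32781, the host port mapped to the in-container apiserver port 8441/tcp

docker port functional-762247 8441/tcp reports the same mapping in host:port form.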
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-762247 -n functional-762247
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-762247 logs -n 25: (1.207690023s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-762247 ssh sudo cat /etc/ssl/certs/14488.pem                                                                    │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ ssh            │ functional-762247 ssh sudo cat /usr/share/ca-certificates/14488.pem                                                        │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ ssh            │ functional-762247 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ ssh            │ functional-762247 ssh sudo cat /etc/ssl/certs/144882.pem                                                                   │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ ssh            │ functional-762247 ssh sudo cat /usr/share/ca-certificates/144882.pem                                                       │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ ssh            │ functional-762247 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ cp             │ functional-762247 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ ssh            │ functional-762247 ssh -n functional-762247 sudo cat /home/docker/cp-test.txt                                               │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ cp             │ functional-762247 cp functional-762247:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1518080208/001/cp-test.txt │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ ssh            │ functional-762247 ssh -n functional-762247 sudo cat /home/docker/cp-test.txt                                               │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ cp             │ functional-762247 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ ssh            │ functional-762247 ssh -n functional-762247 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ start          │ -p functional-762247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │                     │
	│ start          │ -p functional-762247 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                            │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-762247 --alsologtostderr -v=1                                                             │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ image          │ functional-762247 image ls --format short --alsologtostderr                                                                │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ image          │ functional-762247 image ls --format yaml --alsologtostderr                                                                 │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ ssh            │ functional-762247 ssh pgrep buildkitd                                                                                      │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │                     │
	│ image          │ functional-762247 image ls --format json --alsologtostderr                                                                 │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ image          │ functional-762247 image build -t localhost/my-image:functional-762247 testdata/build --alsologtostderr                     │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ image          │ functional-762247 image ls --format table --alsologtostderr                                                                │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ update-context │ functional-762247 update-context --alsologtostderr -v=2                                                                    │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ update-context │ functional-762247 update-context --alsologtostderr -v=2                                                                    │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ update-context │ functional-762247 update-context --alsologtostderr -v=2                                                                    │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	│ image          │ functional-762247 image ls                                                                                                 │ functional-762247 │ jenkins │ v1.37.0 │ 23 Nov 25 08:05 UTC │ 23 Nov 25 08:05 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
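	
	Several audit entries above read certificates out of the node with ssh sudo cat. When a cert needs decoding rather than just dumping, the same invocation can be piped through openssl on the host; a sketch reusing a path from the table:
	
	    out/minikube-linux-amd64 -p functional-762247 ssh sudo cat /etc/ssl/certs/14488.pem | openssl x509 -noout -subject -enddate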
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:05:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:05:17.906811   53985 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:05:17.906935   53985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:05:17.906941   53985 out.go:374] Setting ErrFile to fd 2...
	I1123 08:05:17.906947   53985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:05:17.907359   53985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:05:17.907821   53985 out.go:368] Setting JSON to false
	I1123 08:05:17.908756   53985 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2865,"bootTime":1763882253,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:05:17.908804   53985 start.go:143] virtualization: kvm guest
	I1123 08:05:17.910393   53985 out.go:179] * [functional-762247] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:05:17.911422   53985 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:05:17.911484   53985 notify.go:221] Checking for updates...
	I1123 08:05:17.913906   53985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:05:17.914879   53985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:05:17.916012   53985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:05:17.920081   53985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:05:17.921125   53985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:05:17.922472   53985 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:05:17.922964   53985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:05:17.946251   53985 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:05:17.946351   53985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:05:18.001446   53985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-23 08:05:17.990446801 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:05:18.001578   53985 docker.go:319] overlay module found
	I1123 08:05:18.005083   53985 out.go:179] * Using the docker driver based on existing profile
	I1123 08:05:18.006043   53985 start.go:309] selected driver: docker
	I1123 08:05:18.006055   53985 start.go:927] validating driver "docker" against &{Name:functional-762247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-762247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:05:18.006136   53985 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:05:18.006213   53985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:05:18.065078   53985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-23 08:05:18.055112724 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:05:18.065707   53985 cni.go:84] Creating CNI manager for ""
	I1123 08:05:18.065770   53985 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:05:18.065823   53985 start.go:353] cluster config:
	{Name:functional-762247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-762247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:05:18.067226   53985 out.go:179] * dry-run validation complete!
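	
	This start stops after "dry-run validation complete!" because it ran with --dry-run: the driver and cluster config are validated against the existing profile, but no container or kubeadm step is executed. The invocation, as recorded in the audit table above:
	
	    out/minikube-linux-amd64 start -p functional-762247 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio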
	
	
	==> CRI-O <==
	Nov 23 08:05:23 functional-762247 crio[3613]: time="2025-11-23T08:05:23.601290115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:05:23 functional-762247 crio[3613]: time="2025-11-23T08:05:23.605414438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:05:23 functional-762247 crio[3613]: time="2025-11-23T08:05:23.605584575Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8a5d93025deab21937b37fd1fc245458eec5e17afa7833d0ba2e40abe448413d/merged/etc/group: no such file or directory"
	Nov 23 08:05:23 functional-762247 crio[3613]: time="2025-11-23T08:05:23.605903082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:05:23 functional-762247 crio[3613]: time="2025-11-23T08:05:23.632427154Z" level=info msg="Created container 23d2d9ce4cd1f255759836f627c677a0591b7d00e98a7c4b624b5a741c41a180: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5q45/kubernetes-dashboard" id=8de904e7-7c9b-4483-b1d1-2e40bf5bc059 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:05:23 functional-762247 crio[3613]: time="2025-11-23T08:05:23.632924322Z" level=info msg="Starting container: 23d2d9ce4cd1f255759836f627c677a0591b7d00e98a7c4b624b5a741c41a180" id=6b231b2f-9abc-49d3-832a-d859823cb21f name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:05:23 functional-762247 crio[3613]: time="2025-11-23T08:05:23.634578664Z" level=info msg="Started container" PID=7731 containerID=23d2d9ce4cd1f255759836f627c677a0591b7d00e98a7c4b624b5a741c41a180 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h5q45/kubernetes-dashboard id=6b231b2f-9abc-49d3-832a-d859823cb21f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd312c0cd2655849b90b5cf52c7c79ec169bd1af15c5cba083947a8fa45612eb
	Nov 23 08:05:26 functional-762247 crio[3613]: time="2025-11-23T08:05:26.990024124Z" level=info msg="Stopping pod sandbox: 086e82d452cbc54436bb1412ec66e86333a6d12bbe4ac3dc2705b34a1ddd2f12" id=78dae59e-c185-431c-803d-5b344d662330 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 08:05:26 functional-762247 crio[3613]: time="2025-11-23T08:05:26.990087293Z" level=info msg="Stopped pod sandbox (already stopped): 086e82d452cbc54436bb1412ec66e86333a6d12bbe4ac3dc2705b34a1ddd2f12" id=78dae59e-c185-431c-803d-5b344d662330 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 08:05:26 functional-762247 crio[3613]: time="2025-11-23T08:05:26.990424992Z" level=info msg="Removing pod sandbox: 086e82d452cbc54436bb1412ec66e86333a6d12bbe4ac3dc2705b34a1ddd2f12" id=f52d75cc-3361-4586-ac77-1aba5c47fb11 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 08:05:26 functional-762247 crio[3613]: time="2025-11-23T08:05:26.99961752Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:05:26 functional-762247 crio[3613]: time="2025-11-23T08:05:26.99967834Z" level=info msg="Removed pod sandbox: 086e82d452cbc54436bb1412ec66e86333a6d12bbe4ac3dc2705b34a1ddd2f12" id=f52d75cc-3361-4586-ac77-1aba5c47fb11 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 08:05:27 functional-762247 crio[3613]: time="2025-11-23T08:05:27.000085082Z" level=info msg="Stopping pod sandbox: 005432823b7fdabcd518ddaa648901e765891253fe92bd19cfe7635ae4936c48" id=63483910-d4be-4572-8351-7f2242609862 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 08:05:27 functional-762247 crio[3613]: time="2025-11-23T08:05:27.000139492Z" level=info msg="Stopped pod sandbox (already stopped): 005432823b7fdabcd518ddaa648901e765891253fe92bd19cfe7635ae4936c48" id=63483910-d4be-4572-8351-7f2242609862 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 08:05:27 functional-762247 crio[3613]: time="2025-11-23T08:05:27.000453027Z" level=info msg="Removing pod sandbox: 005432823b7fdabcd518ddaa648901e765891253fe92bd19cfe7635ae4936c48" id=845be765-a8a6-481b-a172-4efa8f2473f0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 08:05:27 functional-762247 crio[3613]: time="2025-11-23T08:05:27.00280253Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:05:27 functional-762247 crio[3613]: time="2025-11-23T08:05:27.00285663Z" level=info msg="Removed pod sandbox: 005432823b7fdabcd518ddaa648901e765891253fe92bd19cfe7635ae4936c48" id=845be765-a8a6-481b-a172-4efa8f2473f0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 08:05:31 functional-762247 crio[3613]: time="2025-11-23T08:05:31.991418289Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1ee58ada-cb17-44fa-9cac-dc54b8780731 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:05:31 functional-762247 crio[3613]: time="2025-11-23T08:05:31.992174172Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1c4e43c1-0943-472d-b639-d3d3c2ed9f6d name=/runtime.v1.ImageService/PullImage
	Nov 23 08:06:19 functional-762247 crio[3613]: time="2025-11-23T08:06:19.991203222Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ad6e0b2f-bdf4-4024-b318-81c930bfb1ee name=/runtime.v1.ImageService/PullImage
	Nov 23 08:06:21 functional-762247 crio[3613]: time="2025-11-23T08:06:21.991247175Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0992010d-0ad9-4450-96ff-bdcad3ae3604 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:07:46 functional-762247 crio[3613]: time="2025-11-23T08:07:46.991979536Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3c4e72d5-b3b7-48ae-a9a0-b1f2e01cc9ad name=/runtime.v1.ImageService/PullImage
	Nov 23 08:07:47 functional-762247 crio[3613]: time="2025-11-23T08:07:47.991720879Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=86a9ea08-29e2-49a4-ab77-3712e38d1a94 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:10:30 functional-762247 crio[3613]: time="2025-11-23T08:10:30.991762959Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fc4bd2ae-07f2-4be6-9d77-1673221283f3 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:10:35 functional-762247 crio[3613]: time="2025-11-23T08:10:35.991086244Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=149984a5-2915-4154-b19e-6072065b0662 name=/runtime.v1.ImageService/PullImage
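	
	The tail of this log is the notable part: the same kicbase/echo-server:latest pull is re-requested from 08:05:31 through 08:10:35 with no matching "Pulled image" entry, which lines up with the ServiceCmd/DeployApp and ServiceCmdConnect timeouts above. A quick way to check whether the image ever landed, and to retry the pull by hand (a sketch against this run's profile, using crictl inside the node):
	
	    out/minikube-linux-amd64 -p functional-762247 ssh sudo crictl images | grep echo-server
	    out/minikube-linux-amd64 -p functional-762247 ssh sudo crictl pull docker.io/kicbase/echo-server:latest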
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	23d2d9ce4cd1f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   fd312c0cd2655       kubernetes-dashboard-855c9754f9-h5q45        kubernetes-dashboard
	5df451f55b319       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   23b0ba6be1117       dashboard-metrics-scraper-77bf4d6c4c-4zf6h   kubernetes-dashboard
	ac78a7c0d3e3d       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   a76da18572369       mysql-5bb876957f-rm9t2                       default
	256691edbdb86       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   2154ac7adf8e6       sp-pod                                       default
	28d890d0f400f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   9c0cc06603af1       busybox-mount                                default
	45d77afc23478       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   0cc7ab7724123       nginx-svc                                    default
	19eda2092a067       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 11 minutes ago      Running             kube-apiserver              2                   7e08a26bfee0f       kube-apiserver-functional-762247             kube-system
	b638a90434ba2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Running             kube-controller-manager     2                   38ac96e725efc       kube-controller-manager-functional-762247    kube-system
	a8b676cb26dbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         2                   770585e36330d       storage-provisioner                          kube-system
	1752571ace231       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 11 minutes ago      Exited              kube-apiserver              1                   7e08a26bfee0f       kube-apiserver-functional-762247             kube-system
	fab0f7342d7c7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   f824a8043e605       kube-scheduler-functional-762247             kube-system
	740467202a3c7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     1                   38ac96e725efc       kube-controller-manager-functional-762247    kube-system
	c7a91ad86ad34       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Running             etcd                        1                   13441b11a74c6       etcd-functional-762247                       kube-system
	ab33e6a8da218       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   66a23f58723b7       kube-proxy-8mrhn                             kube-system
	d7ab42dc35495       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   770585e36330d       storage-provisioner                          kube-system
	cecccea5d9633       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   285a3536208c6       coredns-66bc5c9577-szgql                     kube-system
	1f295aed842c3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   fea632ee19bb9       kindnet-k2k82                                kube-system
	f818e73af08e7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 12 minutes ago      Exited              coredns                     0                   285a3536208c6       coredns-66bc5c9577-szgql                     kube-system
	af06919fc0330       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Exited              kube-proxy                  0                   66a23f58723b7       kube-proxy-8mrhn                             kube-system
	c6df6afc1bd44       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   fea632ee19bb9       kindnet-k2k82                                kube-system
	b941aa9a61efd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 13 minutes ago      Exited              etcd                        0                   13441b11a74c6       etcd-functional-762247                       kube-system
	897d530ad176e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 13 minutes ago      Exited              kube-scheduler              0                   f824a8043e605       kube-scheduler-functional-762247             kube-system
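	
	This table is CRI-O's view of every container, including the Exited first attempts left over from the restart; it is the listing crictl produces inside the node. A sketch for narrowing it to just the exited attempts:
	
	    out/minikube-linux-amd64 -p functional-762247 ssh sudo crictl ps -a --state exited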
	
	
	==> coredns [cecccea5d96330e6343eae363898c44e17ef2f4287f8574579d811f25050b095] <==
	[INFO] 127.0.0.1:39184 - 19413 "HINFO IN 808672003014588905.7743101680477015606. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.061411656s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
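	
	The connection-refused errors above are coredns failing to reach the apiserver service VIP (10.96.0.1:443), consistent with the kube-apiserver restart visible in the container table; once the "Still waiting on" readiness messages stop alternating with errors, the kubernetes plugin has reconnected. The same log can be pulled live with a label selector instead of a container ID (a sketch, assuming the standard k8s-app=kube-dns label):
	
	    kubectl --context functional-762247 -n kube-system logs -l k8s-app=kube-dns --tail=20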
	
	
	==> coredns [f818e73af08e7ca20515a553cea30ed06e2a0d7f70c55a0c3440b28af77df0d0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51200 - 61167 "HINFO IN 4398178143172103777.2503951063608228337. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.509027854s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-762247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-762247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=functional-762247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_01_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:01:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-762247
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:14:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:14:29 +0000   Sun, 23 Nov 2025 08:01:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:14:29 +0000   Sun, 23 Nov 2025 08:01:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:14:29 +0000   Sun, 23 Nov 2025 08:01:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:14:29 +0000   Sun, 23 Nov 2025 08:02:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-762247
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c8ea1ad3-b96a-4635-b98f-b0405089fda1
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-qqdxc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  default                     hello-node-connect-7d85dfc575-lm72j           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-rm9t2                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m42s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-66bc5c9577-szgql                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-762247                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-k2k82                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-762247              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-762247     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8mrhn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-762247              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4zf6h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-h5q45         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 10m   kube-proxy       
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node functional-762247 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node functional-762247 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node functional-762247 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m   node-controller  Node functional-762247 event: Registered Node functional-762247 in Controller
	  Normal   NodeReady                12m   kubelet          Node functional-762247 status is now: NodeReady
	  Warning  ContainerGCFailed        11m   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node functional-762247 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node functional-762247 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node functional-762247 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m   node-controller  Node functional-762247 event: Registered Node functional-762247 in Controller
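	
	Per the conditions above, the node has been Ready since 08:02:42 and stayed Ready through both kubelet restarts, so the service failures in this test are not a node-health problem. The Ready condition can be checked in isolation with a jsonpath filter (a sketch):
	
	    kubectl --context functional-762247 get node functional-762247 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'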
	
	
	==> dmesg <==
	[  +0.079858] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024030] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.151122] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 07:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.034290] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +2.047767] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +4.031598] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +8.127154] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[ +16.382339] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[Nov23 07:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
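	
	The repeated "martian source" entries are the kernel flagging packets that claim a loopback source (127.0.0.1) while arriving on eth0, typically a side effect of NATed traffic on the kic network; they are noisy but harmless here. Whether they are logged at all is governed by a sysctl, which can be checked inside the node (a sketch):
	
	    out/minikube-linux-amd64 -p functional-762247 ssh sudo sysctl net.ipv4.conf.all.log_martians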
	
	
	==> etcd [b941aa9a61efd405d95245fa6145a35969474e7b917d6e224299bce8c2da1b00] <==
	{"level":"warn","ts":"2025-11-23T08:01:53.186668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:01:53.193058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:01:53.199307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:01:53.214757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:01:53.220257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:01:53.226766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:01:53.272214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42442","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:03:24.907632Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T08:03:24.907722Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-762247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-23T08:03:24.907795Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:03:24.909323Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:03:24.909374Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:03:24.909390Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-23T08:03:24.909465Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T08:03:24.909483Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-23T08:03:24.909475Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:03:24.909503Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:03:24.909517Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T08:03:24.909492Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:03:24.909536Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:03:24.909546Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:03:24.911005Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-23T08:03:24.911063Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:03:24.911089Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-23T08:03:24.911095Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-762247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [c7a91ad86ad345a3673076b36e2959e790fe6bbfd269f0eb1a4ed91616fe7efc] <==
	{"level":"warn","ts":"2025-11-23T08:03:51.043706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.050095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.056560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.062820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.077143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.082554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.088307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.094583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.100923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.106510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.112180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.118037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.123714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.129547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.135494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.141286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.146872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.153926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.159460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.178318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.184061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:03:51.190377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51836","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:13:50.798178Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1140}
	{"level":"info","ts":"2025-11-23T08:13:50.817432Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1140,"took":"18.914815ms","hash":3506258239,"current-db-size-bytes":3506176,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-23T08:13:50.817467Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3506258239,"revision":1140,"compact-revision":-1}
	
	
	==> kernel <==
	 08:14:53 up 57 min,  0 user,  load average: 0.02, 0.15, 0.28
	Linux functional-762247 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1f295aed842c37d5acc4caf5cc80d9f16c61b6e0420f49fbb32edf7df63f0e49] <==
	I1123 08:12:45.423191       1 main.go:301] handling current node
	I1123 08:12:55.427627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:12:55.427675       1 main.go:301] handling current node
	I1123 08:13:05.426138       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:13:05.426181       1 main.go:301] handling current node
	I1123 08:13:15.422947       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:13:15.422982       1 main.go:301] handling current node
	I1123 08:13:25.428137       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:13:25.428176       1 main.go:301] handling current node
	I1123 08:13:35.429240       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:13:35.429278       1 main.go:301] handling current node
	I1123 08:13:45.422946       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:13:45.422976       1 main.go:301] handling current node
	I1123 08:13:55.431995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:13:55.432033       1 main.go:301] handling current node
	I1123 08:14:05.426224       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:14:05.426267       1 main.go:301] handling current node
	I1123 08:14:15.423756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:14:15.423787       1 main.go:301] handling current node
	I1123 08:14:25.425755       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:14:25.425788       1 main.go:301] handling current node
	I1123 08:14:35.427883       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:14:35.427913       1 main.go:301] handling current node
	I1123 08:14:45.423772       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:14:45.423814       1 main.go:301] handling current node
	
	
	==> kindnet [c6df6afc1bd4416f9468eca136823a0430bc01b33155498bd2bfd36204cdeb6e] <==
	I1123 08:02:01.866529       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1123 08:02:01.866662       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:02:01.866682       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:02:01.866724       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:02:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:02:02.068832       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:02:02.068878       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:02:02.068890       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:02:02.069302       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:02:32.069558       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:02:32.069737       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:02:32.069842       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:02:32.069928       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 08:02:33.369002       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:02:33.369029       1 metrics.go:72] Registering metrics
	I1123 08:02:33.369097       1 controller.go:711] "Syncing nftables rules"
	I1123 08:02:42.072530       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:02:42.072576       1 main.go:301] handling current node
	I1123 08:02:52.070951       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:02:52.070980       1 main.go:301] handling current node
	I1123 08:03:02.072783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:03:02.072815       1 main.go:301] handling current node
	I1123 08:03:12.069571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:03:12.069604       1 main.go:301] handling current node
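	
	The four reflector timeouts at 08:02:32 coincide with the apiserver restart; the caches sync again one second later, so kindnet recovered on its own. A quick liveness probe of the control plane from the host, for reference:
	
	  $ kubectl --context functional-762247 get --raw='/readyz?verbose'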
	
	
	==> kube-apiserver [1752571ace2318f2e490ce006ccd1d66cbfda110c8b306b30dc154c1efc84ae6] <==
	I1123 08:03:28.125999       1 options.go:263] external host was not specified, using 192.168.49.2
	I1123 08:03:28.128334       1 server.go:150] Version: v1.34.1
	I1123 08:03:28.128357       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1123 08:03:28.128645       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
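	
	This instance exited immediately because the previous apiserver still held 0.0.0.0:8441 when the new one tried to bind. To see which process owns the port (assuming iproute2's ss is present in the node image):
	
	  $ minikube -p functional-762247 ssh -- sudo ss -ltnp 'sport = :8441'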
	
	
	==> kube-apiserver [19eda2092a06740879bbf945cb68ca518aa179fd55c55760d2611667584a21d4] <==
	I1123 08:03:51.681137       1 policy_source.go:240] refreshing policies
	I1123 08:03:51.685545       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:03:51.704734       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:03:52.569708       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1123 08:03:52.774628       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1123 08:03:52.779606       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:03:53.594590       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:03:55.096015       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:03:57.052086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:04:44.859429       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.60.36"}
	I1123 08:04:50.451628       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.211.37"}
	I1123 08:04:51.394831       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.14.28"}
	I1123 08:04:55.291547       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.0.89"}
	E1123 08:05:06.261921       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49108: use of closed network connection
	I1123 08:05:10.986711       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.132.93"}
	E1123 08:05:15.523882       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58894: use of closed network connection
	I1123 08:05:18.871164       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:05:18.909738       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:05:18.918900       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:05:18.955165       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.206.76"}
	I1123 08:05:18.967994       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.83.145"}
	E1123 08:05:23.177723       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45596: use of closed network connection
	E1123 08:05:23.965935       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45626: use of closed network connection
	E1123 08:05:25.779200       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45654: use of closed network connection
	I1123 08:13:51.583798       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [740467202a3c74fb1bdc0910986b585954a328c36a179f36c9a2b2115554cfdf] <==
	I1123 08:03:28.457008       1 controllermanager.go:781] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1123 08:03:28.457030       1 controllermanager.go:744] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1123 08:03:28.457058       1 shared_informer.go:349] "Waiting for caches to sync" controller="validatingadmissionpolicy-status"
	I1123 08:03:28.458901       1 controllermanager.go:781] "Started controller" controller="endpointslice-mirroring-controller"
	I1123 08:03:28.459006       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1123 08:03:28.459025       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice_mirroring"
	I1123 08:03:28.469030       1 shared_informer.go:356] "Caches are synced" controller="tokens"
	I1123 08:03:28.531252       1 controllermanager.go:781] "Started controller" controller="namespace-controller"
	I1123 08:03:28.531286       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1123 08:03:28.531306       1 shared_informer.go:349] "Waiting for caches to sync" controller="namespace"
	I1123 08:03:28.570151       1 controllermanager.go:781] "Started controller" controller="deployment-controller"
	I1123 08:03:28.570250       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1123 08:03:28.570264       1 shared_informer.go:349] "Waiting for caches to sync" controller="deployment"
	I1123 08:03:28.620432       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1123 08:03:28.620449       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="node-route-controller"
	I1123 08:03:28.620477       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1123 08:03:28.670093       1 controllermanager.go:781] "Started controller" controller="ttl-after-finished-controller"
	I1123 08:03:28.670110       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1123 08:03:28.670153       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1123 08:03:28.670162       1 shared_informer.go:349] "Waiting for caches to sync" controller="TTL after finished"
	I1123 08:03:28.821819       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1123 08:03:28.821845       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 08:03:28.821871       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1123 08:03:28.821961       1 controllermanager.go:781] "Started controller" controller="garbage-collector-controller"
	F1123 08:03:29.018734       1 client_builder_dynamic.go:138] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/disruption-controller": dial tcp 192.168.49.2:8441: connect: connection refused
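	
	The fatal exit here is the controller-manager losing the apiserver on 8441 mid-startup. Static pods are restarted by the kubelet, and the second instance below comes up cleanly; the restart history is visible on the node with:
	
	  $ minikube -p functional-762247 ssh -- sudo crictl ps -a --name kube-controller-manager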
	
	
	==> kube-controller-manager [b638a90434ba2f329ab821ec5edc96cc4daa9140ccf55df5b05894d03b330ad3] <==
	I1123 08:03:54.989732       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:03:54.990887       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:03:54.990911       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:03:54.990926       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:03:54.990939       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:03:54.990953       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:03:54.991001       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:03:54.991008       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:03:54.991009       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:03:54.995504       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:03:54.995511       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:03:55.005992       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:03:55.007094       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:03:55.007149       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:03:55.014294       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:03:55.016529       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:03:55.017633       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:03:55.019883       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:03:55.024066       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	E1123 08:05:18.909821       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:05:18.913395       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:05:18.918276       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:05:18.918362       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:05:18.921101       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:05:18.926130       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
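	
	The serviceaccount "kubernetes-dashboard" not found errors are a transient ordering issue: the dashboard addon applies its Deployments before the ServiceAccount, so the first few ReplicaSet syncs fail and then succeed on retry. One way to confirm the account eventually landed:
	
	  $ kubectl --context functional-762247 -n kubernetes-dashboard get serviceaccount,deployment,replicaset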
	
	
	==> kube-proxy [ab33e6a8da218e99d5d1f7f24079bc8501c5c62126835588dc09f18d6ede03eb] <==
	E1123 08:03:16.202595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-762247&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:03:18.399830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-762247&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:03:24.562774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-762247&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:03:31.131159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-762247&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:03:49.700847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-762247&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1123 08:04:28.176340       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:04:28.176370       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:04:28.176449       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:04:28.194412       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:04:28.194463       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:04:28.199599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:04:28.199928       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:04:28.199943       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:04:28.201453       1 config.go:200] "Starting service config controller"
	I1123 08:04:28.201476       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:04:28.201481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:04:28.201495       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:04:28.201548       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:04:28.201561       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:04:28.201612       1 config.go:309] "Starting node config controller"
	I1123 08:04:28.201620       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:04:28.201626       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:04:28.301637       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:04:28.301661       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:04:28.301702       1 shared_informer.go:356] "Caches are synced" controller="service config"
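	
	The "nodePortAddresses is unset" warning above repeats on every kube-proxy restart. If it matters, the fix the message suggests maps to the nodePortAddresses field in the kube-proxy ConfigMap (kubeadm-style layout assumed; the "primary" value requires Kubernetes v1.27+):
	
	  $ kubectl --context functional-762247 -n kube-system edit configmap kube-proxy
	  # under config.conf, set: nodePortAddresses: ["primary"]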
	
	
	==> kube-proxy [af06919fc0330885192ad77eeb0d4c2ad568d5ed4b699973d403878cb163877c] <==
	I1123 08:02:01.724715       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:02:01.789986       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:02:01.891063       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:02:01.891116       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:02:01.891229       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:02:01.915521       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:02:01.915580       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:02:01.921884       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:02:01.922291       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:02:01.922308       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:02:01.923796       1 config.go:200] "Starting service config controller"
	I1123 08:02:01.923815       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:02:01.923820       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:02:01.923853       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:02:01.923866       1 config.go:309] "Starting node config controller"
	I1123 08:02:01.923878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:02:01.923882       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:02:01.923886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:02:01.923887       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:02:02.024283       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:02:02.024306       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:02:02.024330       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [897d530ad176e94c2f79b959280a3d7ef08c1dc4e25d7550a84eb784a70f43b2] <==
	E1123 08:01:53.647533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:01:53.647598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:01:53.647614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:01:53.647725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:01:53.647737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:01:53.647724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:01:53.647598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:01:53.647864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:01:53.647920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:01:54.501450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:01:54.529268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:01:54.565434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:01:54.585362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:01:54.622420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:01:54.629352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:01:54.644709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:01:54.816218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:01:54.860559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1123 08:01:56.745243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:03:25.128872       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 08:03:25.128931       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:03:25.128978       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 08:03:25.129027       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 08:03:25.129070       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 08:03:25.129109       1 run.go:72] "command failed" err="finished without leader elect"
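	
	"finished without leader elect" is the normal exit path when a scheduler holding leadership is told to stop; the replacement instance below re-acquires the lock. The current holder can be read from the lease:
	
	  $ kubectl --context functional-762247 -n kube-system get lease kube-scheduler \
	      -o jsonpath='{.spec.holderIdentity}'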
	
	
	==> kube-scheduler [fab0f7342d7c7019baf3192cf7408f27996b96a65375eeb72a1974adb2463833] <==
	I1123 08:03:27.983617       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:03:28.499606       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:03:28.499628       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:03:28.504841       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:03:28.505063       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:03:28.506376       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:03:28.506398       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:03:28.506432       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:03:28.506441       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:03:28.506655       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:03:28.507051       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:03:28.607066       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:03:28.607069       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:03:28.611358       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	E1123 08:03:51.578964       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	
	
	==> kubelet <==
	Nov 23 08:12:10 functional-762247 kubelet[4172]: E1123 08:12:10.991501    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:12:20 functional-762247 kubelet[4172]: E1123 08:12:20.991283    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:12:25 functional-762247 kubelet[4172]: E1123 08:12:25.990784    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:12:35 functional-762247 kubelet[4172]: E1123 08:12:35.991082    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:12:39 functional-762247 kubelet[4172]: E1123 08:12:39.990554    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:12:48 functional-762247 kubelet[4172]: E1123 08:12:48.990755    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:12:54 functional-762247 kubelet[4172]: E1123 08:12:54.990872    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:13:00 functional-762247 kubelet[4172]: E1123 08:13:00.991159    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:13:07 functional-762247 kubelet[4172]: E1123 08:13:07.990919    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:13:15 functional-762247 kubelet[4172]: E1123 08:13:15.991317    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:13:19 functional-762247 kubelet[4172]: E1123 08:13:19.990343    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:13:30 functional-762247 kubelet[4172]: E1123 08:13:30.990949    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:13:30 functional-762247 kubelet[4172]: E1123 08:13:30.991034    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:13:42 functional-762247 kubelet[4172]: E1123 08:13:42.990818    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:13:44 functional-762247 kubelet[4172]: E1123 08:13:44.991084    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:13:54 functional-762247 kubelet[4172]: E1123 08:13:54.991180    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:13:57 functional-762247 kubelet[4172]: E1123 08:13:57.991225    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:14:05 functional-762247 kubelet[4172]: E1123 08:14:05.991417    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:14:08 functional-762247 kubelet[4172]: E1123 08:14:08.990787    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:14:17 functional-762247 kubelet[4172]: E1123 08:14:17.991122    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:14:22 functional-762247 kubelet[4172]: E1123 08:14:22.993187    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:14:31 functional-762247 kubelet[4172]: E1123 08:14:31.991077    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:14:35 functional-762247 kubelet[4172]: E1123 08:14:35.990847    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
	Nov 23 08:14:43 functional-762247 kubelet[4172]: E1123 08:14:43.991901    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-lm72j" podUID="ed72bdd6-61e2-4de2-8449-6633d8405528"
	Nov 23 08:14:48 functional-762247 kubelet[4172]: E1123 08:14:48.991120    4172 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qqdxc" podUID="064199d7-f884-4a50-a8fd-fc1d8f55c128"
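	
	Every kubelet error above is the same root cause: CRI-O is resolving short names in enforcing mode, and the bare kicbase/echo-server reference is ambiguous across the configured registries, so the pull never succeeds and both hello-node pods stay in ImagePullBackOff. Fully qualifying the image should clear the back-off (assuming the intended image is the Docker Hub one):
	
	  $ kubectl --context functional-762247 set image deployment/hello-node \
	      echo-server=docker.io/kicbase/echo-server:latest
	  $ kubectl --context functional-762247 set image deployment/hello-node-connect \
	      echo-server=docker.io/kicbase/echo-server:latest
	
	Alternatively, relaxing short-name-mode in the node's /etc/containers/registries.conf would let the ambiguous reference resolve, at the cost of the safety the enforcing mode provides.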
	
	
	==> kubernetes-dashboard [23d2d9ce4cd1f255759836f627c677a0591b7d00e98a7c4b624b5a741c41a180] <==
	2025/11/23 08:05:23 Using namespace: kubernetes-dashboard
	2025/11/23 08:05:23 Using in-cluster config to connect to apiserver
	2025/11/23 08:05:23 Using secret token for csrf signing
	2025/11/23 08:05:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:05:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:05:23 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:05:23 Generating JWE encryption key
	2025/11/23 08:05:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:05:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:05:23 Initializing JWE encryption key from synchronized object
	2025/11/23 08:05:23 Creating in-cluster Sidecar client
	2025/11/23 08:05:23 Successful request to sidecar
	2025/11/23 08:05:23 Serving insecurely on HTTP port: 9090
	2025/11/23 08:05:23 Starting overwatch
	
	
	==> storage-provisioner [a8b676cb26dbc49261a3c07cd680d89228509d8ef01103c4760253bc483d1142] <==
	W1123 08:14:27.816427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:29.819369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:29.822804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:31.825531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:31.830353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:33.833534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:33.836922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:35.839997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:35.844642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:37.847401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:37.850937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:39.853711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:39.857745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:41.860313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:41.864155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:43.866915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:43.870397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:45.872714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:45.876424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:47.878910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:47.882297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:49.884759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:49.889036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:51.891377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:14:51.895278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
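	
	The Endpoints deprecation warnings come from the provisioner's leader-election lock, which still uses core/v1 Endpoints; they fire on every renewal (about every two seconds here) and are harmless. The lock object itself (resource name assumed from minikube's hostpath provisioner) can be inspected with:
	
	  $ kubectl --context functional-762247 -n kube-system get endpoints \
	      k8s.io-minikube-hostpath -o yaml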
	
	
	==> storage-provisioner [d7ab42dc35495d00b3347a60711052633d38beb7bacea9924ad5523cb4a0f356] <==
	I1123 08:03:15.083247       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:03:15.084709       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-762247 -n functional-762247
helpers_test.go:269: (dbg) Run:  kubectl --context functional-762247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-qqdxc hello-node-connect-7d85dfc575-lm72j
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-762247 describe pod busybox-mount hello-node-75c85bcc94-qqdxc hello-node-connect-7d85dfc575-lm72j
helpers_test.go:290: (dbg) kubectl --context functional-762247 describe pod busybox-mount hello-node-75c85bcc94-qqdxc hello-node-connect-7d85dfc575-lm72j:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-762247/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:05:00 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://28d890d0f400f64563bf537ba9c569361ca84ec4fe73de2706b4c819e01dd0ec
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 23 Nov 2025 08:05:01 +0000
	      Finished:     Sun, 23 Nov 2025 08:05:01 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-htnvb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-htnvb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m53s  default-scheduler  Successfully assigned default/busybox-mount to functional-762247
	  Normal  Pulling    9m53s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m53s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 621ms (621ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m53s  kubelet            Created container: mount-munger
	  Normal  Started    9m53s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-qqdxc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-762247/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:04:55 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6rqtm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6rqtm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m58s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-qqdxc to functional-762247
	  Normal   Pulling    7m8s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m48s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m48s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-lm72j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-762247/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:04:51 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cb5mf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cb5mf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-lm72j to functional-762247
	  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m57s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m57s (x21 over 10m)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.76s)
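
Root cause: CRI-O's short-name enforcement rejects the unqualified image reference kicbase/echo-server (see the "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" events above), leaving both echo-server pods in ImagePullBackOff until the 10m wait expires. A minimal workaround sketch, assuming the intended image is the copy published on Docker Hub, is to deploy with a fully qualified name so short-name resolution never runs:

	kubectl --context functional-762247 create deployment hello-node-connect --image docker.io/kicbase/echo-server:latest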

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image load --daemon kicbase/echo-server:functional-762247 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-762247" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)
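
The `image ls` assertion runs synchronously right after `image load --daemon` returns, so a timing flake is unlikely; the tag simply never lands in the node's CRI-O storage. A hedged way to inspect what the node actually holds (crictl ships inside the kicbase image) would be:

	out/minikube-linux-amd64 -p functional-762247 ssh -- sudo crictl images
	out/minikube-linux-amd64 -p functional-762247 image ls --alsologtostderr

The same missing-image symptom repeats in ImageReloadDaemon and ImageTagAndLoadDaemon below.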

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image load --daemon kicbase/echo-server:functional-762247 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-762247" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-762247
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image load --daemon kicbase/echo-server:functional-762247 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-762247" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.24s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image save kicbase/echo-server:functional-762247 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)
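
This failure cascades into ImageLoadFromFile below: the tar that `image save` should have written never exists, so the follow-up load fails its stat with "no such file or directory". A hedged reproduction sketch outside the suite (the /tmp path is illustrative):

	out/minikube-linux-amd64 -p functional-762247 image save kicbase/echo-server:functional-762247 /tmp/echo-server.tar
	test -s /tmp/echo-server.tar && tar -tf /tmp/echo-server.tar | head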

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1123 08:04:54.703015   48812 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:04:54.703327   48812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:04:54.703338   48812 out.go:374] Setting ErrFile to fd 2...
	I1123 08:04:54.703342   48812 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:04:54.703564   48812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:04:54.704135   48812 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:04:54.704253   48812 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:04:54.704661   48812 cli_runner.go:164] Run: docker container inspect functional-762247 --format={{.State.Status}}
	I1123 08:04:54.722006   48812 ssh_runner.go:195] Run: systemctl --version
	I1123 08:04:54.722042   48812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762247
	I1123 08:04:54.738441   48812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/functional-762247/id_rsa Username:docker}
	I1123 08:04:54.835364   48812 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1123 08:04:54.835426   48812 cache_images.go:255] Failed to load cached images for "functional-762247": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1123 08:04:54.835446   48812 cache_images.go:267] failed pushing to: functional-762247

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
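
As noted under ImageSaveToFile, this is a cascade failure rather than an independent bug: the save step never produced the tar, so the stat in cache_images.go cannot succeed.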

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-762247
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image save --daemon kicbase/echo-server:functional-762247 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-762247
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-762247: exit status 1 (16.228178ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-762247

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-762247

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
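
Same family as the load/save failures above: `image save --daemon` exits zero, but nothing reaches the host Docker daemon, so the probe for localhost/kicbase/echo-server:functional-762247 finds no image. A quick hedged check on the host after the save would be `docker image ls | grep echo-server`.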

TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-762247 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-762247 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-qqdxc" [064199d7-f884-4a50-a8fd-fc1d8f55c128] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-762247 -n functional-762247
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-23 08:14:55.604726507 +0000 UTC m=+1172.104177225
functional_test.go:1460: (dbg) Run:  kubectl --context functional-762247 describe po hello-node-75c85bcc94-qqdxc -n default
functional_test.go:1460: (dbg) kubectl --context functional-762247 describe po hello-node-75c85bcc94-qqdxc -n default:
Name:             hello-node-75c85bcc94-qqdxc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-762247/192.168.49.2
Start Time:       Sun, 23 Nov 2025 08:04:55 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6rqtm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6rqtm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-qqdxc to functional-762247
Normal   Pulling    7m9s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-762247 logs hello-node-75c85bcc94-qqdxc -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-762247 logs hello-node-75c85bcc94-qqdxc -n default: exit status 1 (57.040261ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-qqdxc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-762247 logs hello-node-75c85bcc94-qqdxc -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)
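
Every pull attempt in the ten-minute window dies on the same short-name enforcement error recorded in the events. One workaround sketch, assuming the standard containers-registries layout inside the kicbase node (the file name below is illustrative), is to add a short-name alias and restart CRI-O, then delete the pod so the ReplicaSet retries with an unambiguous name:

	out/minikube-linux-amd64 -p functional-762247 ssh
	# inside the node:
	sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<'EOF'
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	sudo systemctl restart crio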

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 service --namespace=default --https --url hello-node: exit status 115 (524.471954ms)

-- stdout --
	https://192.168.49.2:31177
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-762247 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
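
The NodePort itself was allocated (https://192.168.49.2:31177 is printed on stdout), so what fails is minikube's readiness check: no running pod backs hello-node because of the ImagePullBackOff above. A hedged way to confirm the empty backend set:

	kubectl --context functional-762247 get endpoints hello-node

ServiceCmd/Format and ServiceCmd/URL below fail identically, for the same reason.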

TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 service hello-node --url --format={{.IP}}: exit status 115 (522.567557ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-762247 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 service hello-node --url: exit status 115 (518.97011ms)

-- stdout --
	http://192.168.49.2:31177
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-762247 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31177
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

TestJSONOutput/pause/Command (2.38s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-896921 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-896921 --output=json --user=testUser: exit status 80 (2.382329516s)

-- stdout --
	{"specversion":"1.0","id":"f7991ec1-dbde-4c22-bb78-1c03356ce215","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-896921 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e095d9a9-f16b-40be-84be-865eafbccd72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T08:24:57Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"1e8df3ab-a51c-4afb-a270-a5e02dcd70f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-896921 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.38s)
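
minikube pause enumerates running containers by shelling out to `sudo runc list -f json`, and runc reads container state from its root directory, /run/runc by default. "open /run/runc: no such file or directory" means that directory was never created on the node; one plausible cause, not verifiable from this log alone, is CRI-O driving containers through a different OCI runtime or runtime root. A hedged first diagnostic from the host:

	out/minikube-linux-amd64 -p json-output-896921 ssh -- sudo ls /run/runc
	out/minikube-linux-amd64 -p json-output-896921 ssh -- sudo crictl ps

TestJSONOutput/unpause/Command below fails on the same runc error.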

TestJSONOutput/unpause/Command (1.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-896921 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-896921 --output=json --user=testUser: exit status 80 (1.626697932s)

-- stdout --
	{"specversion":"1.0","id":"9548fd50-1ac5-4370-89b2-191307fbe292","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-896921 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"56fe6020-7e13-469d-a8a1-6cbd58498783","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T08:24:58Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"c2386a0b-a6be-4964-b9db-8eb6499de0a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-896921 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.63s)

TestPause/serial/Pause (8.81s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-716098 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-716098 --alsologtostderr -v=5: exit status 80 (1.746851786s)

-- stdout --
	* Pausing node pause-716098 ... 
	
	

-- /stdout --
** stderr ** 
	I1123 08:37:57.889889  209341 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:37:57.890229  209341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:37:57.890241  209341 out.go:374] Setting ErrFile to fd 2...
	I1123 08:37:57.890248  209341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:37:57.890529  209341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:37:57.890848  209341 out.go:368] Setting JSON to false
	I1123 08:37:57.890874  209341 mustload.go:66] Loading cluster: pause-716098
	I1123 08:37:57.891418  209341 config.go:182] Loaded profile config "pause-716098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:37:57.892009  209341 cli_runner.go:164] Run: docker container inspect pause-716098 --format={{.State.Status}}
	I1123 08:37:57.914801  209341 host.go:66] Checking if "pause-716098" exists ...
	I1123 08:37:57.915184  209341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:37:57.977056  209341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:37:57.966572317 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:37:57.977644  209341 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-716098 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:37:57.980129  209341 out.go:179] * Pausing node pause-716098 ... 
	I1123 08:37:57.981097  209341 host.go:66] Checking if "pause-716098" exists ...
	I1123 08:37:57.981362  209341 ssh_runner.go:195] Run: systemctl --version
	I1123 08:37:57.981409  209341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-716098
	I1123 08:37:57.998964  209341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/pause-716098/id_rsa Username:docker}
	I1123 08:37:58.100453  209341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:37:58.112812  209341 pause.go:52] kubelet running: true
	I1123 08:37:58.112868  209341 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:37:58.278579  209341 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:37:58.278754  209341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:37:58.361087  209341 cri.go:89] found id: "a52c570c00b3c3ae086eca845946a180891d48e3d97464103b3987d988c25812"
	I1123 08:37:58.361109  209341 cri.go:89] found id: "3b91434da8e2424aa2abc9c2b29c3bff72cd58d0ef5c628934c46e0bb7ee4f74"
	I1123 08:37:58.361115  209341 cri.go:89] found id: "3142ab901cf20766a2af10228926409fa9b71496197851ff1d4bbe355dc29f0e"
	I1123 08:37:58.361120  209341 cri.go:89] found id: "a0645edd38940accf406db609c29284d37f09d717f50532bc8d93333044d21af"
	I1123 08:37:58.361126  209341 cri.go:89] found id: "b1d6f513b9fd574eaf72e9455cc6b87b3204d19841888239ce70a30b092c7a8f"
	I1123 08:37:58.361130  209341 cri.go:89] found id: "fa9a01040e3eb099588692c065781ee7e96ce362f87dac90c505e830351cf439"
	I1123 08:37:58.361135  209341 cri.go:89] found id: "cd12fe4fa1b43a5e3f17760eec386c2a2ea817d3934efdc29f8514942ee61362"
	I1123 08:37:58.361140  209341 cri.go:89] found id: ""
	I1123 08:37:58.361196  209341 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:37:58.374861  209341 retry.go:31] will retry after 221.90594ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:37:58Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:37:58.597374  209341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:37:58.615818  209341 pause.go:52] kubelet running: false
	I1123 08:37:58.615940  209341 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:37:58.733436  209341 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:37:58.733505  209341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:37:58.805419  209341 cri.go:89] found id: "a52c570c00b3c3ae086eca845946a180891d48e3d97464103b3987d988c25812"
	I1123 08:37:58.805440  209341 cri.go:89] found id: "3b91434da8e2424aa2abc9c2b29c3bff72cd58d0ef5c628934c46e0bb7ee4f74"
	I1123 08:37:58.805446  209341 cri.go:89] found id: "3142ab901cf20766a2af10228926409fa9b71496197851ff1d4bbe355dc29f0e"
	I1123 08:37:58.805452  209341 cri.go:89] found id: "a0645edd38940accf406db609c29284d37f09d717f50532bc8d93333044d21af"
	I1123 08:37:58.805457  209341 cri.go:89] found id: "b1d6f513b9fd574eaf72e9455cc6b87b3204d19841888239ce70a30b092c7a8f"
	I1123 08:37:58.805462  209341 cri.go:89] found id: "fa9a01040e3eb099588692c065781ee7e96ce362f87dac90c505e830351cf439"
	I1123 08:37:58.805466  209341 cri.go:89] found id: "cd12fe4fa1b43a5e3f17760eec386c2a2ea817d3934efdc29f8514942ee61362"
	I1123 08:37:58.805471  209341 cri.go:89] found id: ""
	I1123 08:37:58.805523  209341 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:37:58.817449  209341 retry.go:31] will retry after 531.549965ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:37:58Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:37:59.349887  209341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:37:59.363704  209341 pause.go:52] kubelet running: false
	I1123 08:37:59.363775  209341 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:37:59.479045  209341 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:37:59.479130  209341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:37:59.549571  209341 cri.go:89] found id: "a52c570c00b3c3ae086eca845946a180891d48e3d97464103b3987d988c25812"
	I1123 08:37:59.549594  209341 cri.go:89] found id: "3b91434da8e2424aa2abc9c2b29c3bff72cd58d0ef5c628934c46e0bb7ee4f74"
	I1123 08:37:59.549600  209341 cri.go:89] found id: "3142ab901cf20766a2af10228926409fa9b71496197851ff1d4bbe355dc29f0e"
	I1123 08:37:59.549606  209341 cri.go:89] found id: "a0645edd38940accf406db609c29284d37f09d717f50532bc8d93333044d21af"
	I1123 08:37:59.549610  209341 cri.go:89] found id: "b1d6f513b9fd574eaf72e9455cc6b87b3204d19841888239ce70a30b092c7a8f"
	I1123 08:37:59.549615  209341 cri.go:89] found id: "fa9a01040e3eb099588692c065781ee7e96ce362f87dac90c505e830351cf439"
	I1123 08:37:59.549620  209341 cri.go:89] found id: "cd12fe4fa1b43a5e3f17760eec386c2a2ea817d3934efdc29f8514942ee61362"
	I1123 08:37:59.549624  209341 cri.go:89] found id: ""
	I1123 08:37:59.549715  209341 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:37:59.563704  209341 out.go:203] 
	W1123 08:37:59.564582  209341 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:37:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:37:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:37:59.564599  209341 out.go:285] * 
	* 
	W1123 08:37:59.569088  209341 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:37:59.571808  209341 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-716098 --alsologtostderr -v=5" : exit status 80
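
This is the same /run/runc failure seen in TestJSONOutput/pause/Command: the crictl queries above list seven running kube-system containers on every attempt, yet all three `sudo runc list -f json` calls (two backoff retries of 221ms and 531ms, then the final try) fail, so pause gives up without pausing anything.
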
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-716098
helpers_test.go:243: (dbg) docker inspect pause-716098:

-- stdout --
	[
	    {
	        "Id": "880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990",
	        "Created": "2025-11-23T08:36:43.197505702Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186494,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:36:43.242189862Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990/hostname",
	        "HostsPath": "/var/lib/docker/containers/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990/hosts",
	        "LogPath": "/var/lib/docker/containers/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990-json.log",
	        "Name": "/pause-716098",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-716098:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-716098",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990",
	                "LowerDir": "/var/lib/docker/overlay2/cfe2c8eba195bcd30e7e38bbd52d91928d4defd3a6f50b62decdca38c153a65e-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfe2c8eba195bcd30e7e38bbd52d91928d4defd3a6f50b62decdca38c153a65e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfe2c8eba195bcd30e7e38bbd52d91928d4defd3a6f50b62decdca38c153a65e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfe2c8eba195bcd30e7e38bbd52d91928d4defd3a6f50b62decdca38c153a65e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-716098",
	                "Source": "/var/lib/docker/volumes/pause-716098/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-716098",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-716098",
	                "name.minikube.sigs.k8s.io": "pause-716098",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "71922cdb62c7fe8ef51d5d9274663e3cf934d647289420ad4d9de05dc14b0adb",
	            "SandboxKey": "/var/run/docker/netns/71922cdb62c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-716098": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95b2e60b45dbc2d26f29e43eb662f327a8c44de55025876ae25398b424d3bba1",
	                    "EndpointID": "dd61500d022b0bae608fd7dc575c6820a160eb7bfd2860e3f7d6ea863f03c3d2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "b2:84:b5:fc:76:97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-716098",
	                        "880cb9d38f96"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
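The JSON above is unmodified docker inspect output for the kic container. As a sketch (the --format Go-template flag is stock Docker CLI; the profile name comes from this run), the network block alone can be extracted with:

  docker inspect --format '{{json .NetworkSettings.Networks}}' pause-716098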
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-716098 -n pause-716098
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-716098 -n pause-716098: exit status 2 (354.292905ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
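The host-only probe the helper runs is reproducible by hand with the same Go-template flag; exit status 2 is minikube encoding that some component is not in its expected state even though the host itself reports Running, which the harness tolerates ("may be ok"):

  out/minikube-linux-amd64 status --format={{.Host}} -p pause-716098; echo "exit=$?"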
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-716098 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-716098 logs -n 25: (2.651471207s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-351793 sudo cri-dockerd --version                                                                                                                                                                               │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                 │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo systemctl cat containerd --no-pager                                                                                                                                                                 │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                          │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo cat /etc/containerd/config.toml                                                                                                                                                                     │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo containerd config dump                                                                                                                                                                              │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo systemctl status crio --all --full --no-pager                                                                                                                                                       │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo crio config                                                                                                                                                                                         │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ delete  │ -p cilium-351793                                                                                                                                                                                                          │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ start   │ -p cert-options-795018 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-795018       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ delete  │ -p NoKubernetes-840508                                                                                                                                                                                                    │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ start   │ -p NoKubernetes-840508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ ssh     │ -p NoKubernetes-840508 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ stop    │ -p NoKubernetes-840508                                                                                                                                                                                                    │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ start   │ -p NoKubernetes-840508 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ start   │ -p pause-716098 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-716098              │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ ssh     │ -p NoKubernetes-840508 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ delete  │ -p NoKubernetes-840508                                                                                                                                                                                                    │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ ssh     │ cert-options-795018 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-795018       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ ssh     │ -p cert-options-795018 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-795018       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ pause   │ -p pause-716098 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-716098              │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ delete  │ -p cert-options-795018                                                                                                                                                                                                    │ cert-options-795018       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ start   │ -p force-systemd-flag-170661 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-170661 │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
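	The audit table above is rendered from minikube's persistent audit log. A sketch for slicing it by hand, assuming the default location under the minikube home directory and JSON-lines entries carrying a data map (both the path and the field names here are assumptions):
	
	  jq -r '[.data.command, .data.profile, .data.startTime] | @tsv' ~/.minikube/logs/audit.json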
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:37:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:37:58.478221  209809 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:37:58.478340  209809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:37:58.478350  209809 out.go:374] Setting ErrFile to fd 2...
	I1123 08:37:58.478354  209809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:37:58.478553  209809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:37:58.479031  209809 out.go:368] Setting JSON to false
	I1123 08:37:58.480167  209809 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4825,"bootTime":1763882253,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:37:58.480220  209809 start.go:143] virtualization: kvm guest
	I1123 08:37:58.481995  209809 out.go:179] * [force-systemd-flag-170661] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:37:58.483403  209809 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:37:58.483401  209809 notify.go:221] Checking for updates...
	I1123 08:37:58.485326  209809 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:37:58.486262  209809 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:37:58.487148  209809 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:37:58.488065  209809 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:37:58.488981  209809 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:37:58.490319  209809 config.go:182] Loaded profile config "cert-expiration-747782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:37:58.490462  209809 config.go:182] Loaded profile config "cert-options-795018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:37:58.490633  209809 config.go:182] Loaded profile config "pause-716098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:37:58.490754  209809 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:37:58.516937  209809 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:37:58.517037  209809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:37:58.580533  209809 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:37:58.566788127 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:37:58.580677  209809 docker.go:319] overlay module found
	I1123 08:37:58.582186  209809 out.go:179] * Using the docker driver based on user configuration
	I1123 08:37:58.583260  209809 start.go:309] selected driver: docker
	I1123 08:37:58.583297  209809 start.go:927] validating driver "docker" against <nil>
	I1123 08:37:58.583314  209809 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:37:58.584029  209809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:37:58.658304  209809 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:37:58.640906497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:37:58.658524  209809 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:37:58.658818  209809 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:37:58.661873  209809 out.go:179] * Using Docker driver with root privileges
	I1123 08:37:58.663681  209809 cni.go:84] Creating CNI manager for ""
	I1123 08:37:58.663817  209809 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:37:58.663831  209809 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:37:58.663914  209809 start.go:353] cluster config:
	{Name:force-systemd-flag-170661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-170661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:37:58.665054  209809 out.go:179] * Starting "force-systemd-flag-170661" primary control-plane node in "force-systemd-flag-170661" cluster
	I1123 08:37:58.665995  209809 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:37:58.667037  209809 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:37:58.668045  209809 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:37:58.668080  209809 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:37:58.668087  209809 cache.go:65] Caching tarball of preloaded images
	I1123 08:37:58.668140  209809 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:37:58.668182  209809 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:37:58.668197  209809 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:37:58.668348  209809 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/force-systemd-flag-170661/config.json ...
	I1123 08:37:58.668378  209809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/force-systemd-flag-170661/config.json: {Name:mkf234de72e68aad6ac5e10be72084af3c851a25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:37:58.687648  209809 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:37:58.687663  209809 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:37:58.687678  209809 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:37:58.687720  209809 start.go:360] acquireMachinesLock for force-systemd-flag-170661: {Name:mk811687618e8422e085a20a24d445d0a7ce2f0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:37:58.687810  209809 start.go:364] duration metric: took 69.816µs to acquireMachinesLock for "force-systemd-flag-170661"
	I1123 08:37:58.687848  209809 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-170661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-170661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:37:58.687921  209809 start.go:125] createHost starting for "" (driver="docker")
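	The "Last Start" trace above corresponds to the final start row in the audit table; reconstructed from that row, the invocation being logged is:
	
	  out/minikube-linux-amd64 start -p force-systemd-flag-170661 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=crio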
	
	
	==> CRI-O <==
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.713930924Z" level=info msg="RDT not available in the host system"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.713943788Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.714810705Z" level=info msg="Conmon does support the --sync option"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.714831077Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.714847531Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.715758517Z" level=info msg="Conmon does support the --sync option"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.715778033Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.719855815Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.719875491Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.720338754Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.72071548Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.720765413Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.795994251Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-h9w4d Namespace:kube-system ID:2ea2822c9733ce277fbb7c43a7f6c85ec77e8cce9ea1137e5b49f1e0ea9eb162 UID:6180f67a-cff6-4d6b-88c9-8f9f44293a04 NetNS:/var/run/netns/0bccb5de-5512-46a3-8b21-d671f5c29745 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000892288}] Aliases:map[]}"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796145456Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-h9w4d for CNI network kindnet (type=ptp)"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796513703Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796534737Z" level=info msg="Starting seccomp notifier watcher"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796582264Z" level=info msg="Create NRI interface"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796672009Z" level=info msg="built-in NRI default validator is disabled"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796701394Z" level=info msg="runtime interface created"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796715222Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796722419Z" level=info msg="runtime interface starting up..."
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796729959Z" level=info msg="starting plugins..."
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796744115Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796988541Z" level=info msg="No systemd watchdog enabled"
	Nov 23 08:37:54 pause-716098 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
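	The dump above is CRI-O printing its effective configuration at startup (note cgroup_manager = "systemd" and default_runtime = "crun"; the cgroup manager must match the kubelet's cgroup driver). A sketch to re-read those keys from inside the node, reusing commands exercised elsewhere in this run:
	
	  out/minikube-linux-amd64 ssh -p pause-716098 -- sudo crio config | grep -E 'cgroup_manager|default_runtime'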
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a52c570c00b3c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago       Running             coredns                   0                   2ea2822c9733c       coredns-66bc5c9577-h9w4d               kube-system
	3b91434da8e24       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   52 seconds ago       Running             kindnet-cni               0                   58152b038ad91       kindnet-t9qph                          kube-system
	3142ab901cf20       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   52 seconds ago       Running             kube-proxy                0                   dc8ebf704f566       kube-proxy-dm88x                       kube-system
	a0645edd38940       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   5d0581a890f76       kube-controller-manager-pause-716098   kube-system
	b1d6f513b9fd5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   a6e2b86d6640d       kube-apiserver-pause-716098            kube-system
	fa9a01040e3eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   164efa411434c       etcd-pause-716098                      kube-system
	cd12fe4fa1b43       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   3761ebce7c6ba       kube-scheduler-pause-716098            kube-system
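	The table above is CRI-level container state. A hedged in-node equivalent (crictl is the standard CRI CLI; its presence in the kic image is an assumption consistent with this report):
	
	  out/minikube-linux-amd64 ssh -p pause-716098 -- sudo crictl ps -a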
	
	
	==> coredns [a52c570c00b3c3ae086eca845946a180891d48e3d97464103b3987d988c25812] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39594 - 23494 "HINFO IN 4397778813889748474.6200094615082072776. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058492342s
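	The same CoreDNS output can be pulled through the API server with stock kubectl (k8s-app=kube-dns is the standard label on coredns pods):
	
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20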
	
	
	==> describe nodes <==
	Name:               pause-716098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-716098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=pause-716098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_37_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:36:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-716098
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:37:52 +0000   Sun, 23 Nov 2025 08:36:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:37:52 +0000   Sun, 23 Nov 2025 08:36:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:37:52 +0000   Sun, 23 Nov 2025 08:36:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:37:52 +0000   Sun, 23 Nov 2025 08:37:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-716098
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                eb958d43-02ba-4af1-a7b0-45e8d97c885e
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-h9w4d                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-pause-716098                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-t9qph                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-pause-716098             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-pause-716098    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-dm88x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-pause-716098             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 52s   kube-proxy       
	  Normal  Starting                 60s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node pause-716098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node pause-716098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node pause-716098 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s   node-controller  Node pause-716098 event: Registered Node pause-716098 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-716098 status is now: NodeReady
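	The node dump above is standard kubectl describe output; against this profile's kubeconfig context it regenerates with:
	
	  kubectl describe node pause-716098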
	
	
	==> dmesg <==
	[  +0.079858] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024030] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.151122] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 07:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.034290] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +2.047767] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +4.031598] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +8.127154] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[ +16.382339] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[Nov23 07:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
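	The repeating "martian source" lines are the kernel flagging packets whose source address (127.0.0.1) is implausible on eth0; whether such packets are logged at all is governed by a stock sysctl:
	
	  sysctl net.ipv4.conf.all.log_martians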
	
	
	==> etcd [fa9a01040e3eb099588692c065781ee7e96ce362f87dac90c505e830351cf439] <==
	{"level":"warn","ts":"2025-11-23T08:37:07.890846Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.056672ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790207191074638 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet.187a95e7f5407483\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet.187a95e7f5407483\" value_size:586 lease:4650418170336298560 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:37:07.890872Z","caller":"traceutil/trace.go:172","msg":"trace[1757969666] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:332; }","duration":"270.275686ms","start":"2025-11-23T08:37:07.620586Z","end":"2025-11-23T08:37:07.890862Z","steps":["trace[1757969666] 'agreement among raft nodes before linearized reading'  (duration: 108.228099ms)","trace[1757969666] 'range keys from in-memory index tree'  (duration: 161.887182ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:37:07.891000Z","caller":"traceutil/trace.go:172","msg":"trace[639521649] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"315.890492ms","start":"2025-11-23T08:37:07.575094Z","end":"2025-11-23T08:37:07.890985Z","steps":["trace[639521649] 'process raft request'  (duration: 153.65399ms)","trace[639521649] 'compare'  (duration: 161.95955ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:37:07.891039Z","caller":"traceutil/trace.go:172","msg":"trace[914348219] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"270.258921ms","start":"2025-11-23T08:37:07.620775Z","end":"2025-11-23T08:37:07.891034Z","steps":["trace[914348219] 'process raft request'  (duration: 270.177857ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:37:07.891033Z","caller":"traceutil/trace.go:172","msg":"trace[821324252] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"268.806865ms","start":"2025-11-23T08:37:07.622217Z","end":"2025-11-23T08:37:07.891024Z","steps":["trace[821324252] 'process raft request'  (duration: 268.765601ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:37:07.891041Z","caller":"traceutil/trace.go:172","msg":"trace[1846471931] linearizableReadLoop","detail":"{readStateIndex:345; appliedIndex:343; }","duration":"162.231598ms","start":"2025-11-23T08:37:07.728794Z","end":"2025-11-23T08:37:07.891025Z","steps":["trace[1846471931] 'read index received'  (duration: 138.086219ms)","trace[1846471931] 'applied index is now lower than readState.Index'  (duration: 24.143689ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:37:07.891069Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:37:07.575083Z","time spent":"315.956052ms","remote":"127.0.0.1:54106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":657,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kindnet.187a95e7f5407483\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet.187a95e7f5407483\" value_size:586 lease:4650418170336298560 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:37:07.891015Z","caller":"traceutil/trace.go:172","msg":"trace[347522216] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"315.394004ms","start":"2025-11-23T08:37:07.575608Z","end":"2025-11-23T08:37:07.891002Z","steps":["trace[347522216] 'process raft request'  (duration: 315.300414ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:37:07.891184Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:37:07.575602Z","time spent":"315.563555ms","remote":"127.0.0.1:54106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":691,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-dm88x.187a95e7f67b08f1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-dm88x.187a95e7f67b08f1\" value_size:611 lease:4650418170336298560 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:37:07.891202Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"230.341918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-23T08:37:07.891232Z","caller":"traceutil/trace.go:172","msg":"trace[1289611047] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:336; }","duration":"230.377476ms","start":"2025-11-23T08:37:07.660846Z","end":"2025-11-23T08:37:07.891223Z","steps":["trace[1289611047] 'agreement among raft nodes before linearized reading'  (duration: 230.2686ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:37:08.029237Z","caller":"traceutil/trace.go:172","msg":"trace[880256516] linearizableReadLoop","detail":"{readStateIndex:349; appliedIndex:349; }","duration":"110.179076ms","start":"2025-11-23T08:37:07.919035Z","end":"2025-11-23T08:37:08.029215Z","steps":["trace[880256516] 'read index received'  (duration: 110.17209ms)","trace[880256516] 'applied index is now lower than readState.Index'  (duration: 5.919µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:37:08.186886Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"267.829488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-t9qph\" limit:1 ","response":"range_response_count:1 size:3692"}
	{"level":"info","ts":"2025-11-23T08:37:08.186948Z","caller":"traceutil/trace.go:172","msg":"trace[1963481777] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-t9qph; range_end:; response_count:1; response_revision:338; }","duration":"267.903731ms","start":"2025-11-23T08:37:07.919030Z","end":"2025-11-23T08:37:08.186934Z","steps":["trace[1963481777] 'agreement among raft nodes before linearized reading'  (duration: 110.373761ms)","trace[1963481777] 'range keys from in-memory index tree'  (duration: 157.356103ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:37:08.187370Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.907894ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790207191074649 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:336 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4235 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:37:08.187532Z","caller":"traceutil/trace.go:172","msg":"trace[1947669207] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"267.022253ms","start":"2025-11-23T08:37:07.920499Z","end":"2025-11-23T08:37:08.187521Z","steps":["trace[1947669207] 'process raft request'  (duration: 266.965375ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:37:08.187540Z","caller":"traceutil/trace.go:172","msg":"trace[402077253] linearizableReadLoop","detail":"{readStateIndex:350; appliedIndex:349; }","duration":"158.080996ms","start":"2025-11-23T08:37:08.029448Z","end":"2025-11-23T08:37:08.187529Z","steps":["trace[402077253] 'read index received'  (duration: 155.363482ms)","trace[402077253] 'applied index is now lower than readState.Index'  (duration: 2.716716ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:37:08.187619Z","caller":"traceutil/trace.go:172","msg":"trace[1082486262] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"272.12721ms","start":"2025-11-23T08:37:07.915480Z","end":"2025-11-23T08:37:08.187607Z","steps":["trace[1082486262] 'process raft request'  (duration: 113.916701ms)","trace[1082486262] 'compare'  (duration: 157.709086ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:37:08.187776Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"209.913128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-11-23T08:37:08.187944Z","caller":"traceutil/trace.go:172","msg":"trace[1541705457] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:340; }","duration":"210.086281ms","start":"2025-11-23T08:37:07.977849Z","end":"2025-11-23T08:37:08.187935Z","steps":["trace[1541705457] 'agreement among raft nodes before linearized reading'  (duration: 209.820428ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:37:08.187862Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.945784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-11-23T08:37:08.188050Z","caller":"traceutil/trace.go:172","msg":"trace[144208957] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:340; }","duration":"115.133083ms","start":"2025-11-23T08:37:08.072907Z","end":"2025-11-23T08:37:08.188040Z","steps":["trace[144208957] 'agreement among raft nodes before linearized reading'  (duration: 114.894036ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:37:08.187865Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.878341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-11-23T08:37:08.188134Z","caller":"traceutil/trace.go:172","msg":"trace[1461514322] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:340; }","duration":"181.148319ms","start":"2025-11-23T08:37:08.006975Z","end":"2025-11-23T08:37:08.188123Z","steps":["trace[1461514322] 'agreement among raft nodes before linearized reading'  (duration: 180.817993ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:38:01.146870Z","caller":"traceutil/trace.go:172","msg":"trace[1156819154] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"152.101566ms","start":"2025-11-23T08:38:00.994754Z","end":"2025-11-23T08:38:01.146855Z","steps":["trace[1156819154] 'process raft request'  (duration: 130.1779ms)","trace[1156819154] 'compare'  (duration: 21.847669ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:38:01 up  1:20,  0 user,  load average: 2.55, 1.76, 1.26
	Linux pause-716098 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3b91434da8e2424aa2abc9c2b29c3bff72cd58d0ef5c628934c46e0bb7ee4f74] <==
	I1123 08:37:08.796012       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:37:08.796256       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:37:08.796384       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:37:08.796405       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:37:08.796416       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:37:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:37:08.996502       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:37:08.996535       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:37:08.996548       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:37:08.996671       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:37:38.997679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:37:38.997704       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:37:38.997728       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:37:38.997731       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 08:37:40.197563       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:37:40.197589       1 metrics.go:72] Registering metrics
	I1123 08:37:40.197667       1 controller.go:711] "Syncing nftables rules"
	I1123 08:37:49.002963       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:37:49.003013       1 main.go:301] handling current node
	I1123 08:37:59.000762       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:37:59.000793       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b1d6f513b9fd574eaf72e9455cc6b87b3204d19841888239ce70a30b092c7a8f] <==
	I1123 08:36:59.571471       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:36:59.575280       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 08:36:59.575315       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 08:36:59.589282       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:36:59.593637       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:36:59.602525       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:36:59.604287       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:36:59.745589       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:37:00.373455       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:37:00.377454       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:37:00.377472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:37:00.839670       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:37:00.869160       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:37:00.978389       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:37:00.984027       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 08:37:00.984919       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:37:00.988612       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:37:01.432858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:37:02.082726       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:37:02.092464       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:37:02.100159       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:37:06.971272       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:37:07.191927       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:37:07.431255       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:37:07.557000       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a0645edd38940accf406db609c29284d37f09d717f50532bc8d93333044d21af] <==
	I1123 08:37:06.433525       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:37:06.433531       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:37:06.434542       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:37:06.435715       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:37:06.453197       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:37:06.459341       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:37:06.460553       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:37:06.464728       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:37:06.465789       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:37:06.472037       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:37:06.475234       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:37:06.477453       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:37:06.479708       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:37:06.479727       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:37:06.479800       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:37:06.479904       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-716098"
	I1123 08:37:06.479979       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:37:06.480184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:37:06.480201       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:37:06.480208       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:37:06.480301       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:37:06.483134       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:37:06.494594       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:37:06.599031       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-716098" podCIDRs=["10.244.0.0/24"]
	I1123 08:37:51.487068       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3142ab901cf20766a2af10228926409fa9b71496197851ff1d4bbe355dc29f0e] <==
	I1123 08:37:08.614185       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:37:08.675272       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:37:08.777153       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:37:08.777255       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:37:08.777379       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:37:08.801432       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:37:08.801476       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:37:08.808624       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:37:08.809045       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:37:08.809337       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:37:08.810995       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:37:08.811496       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:37:08.811626       1 config.go:200] "Starting service config controller"
	I1123 08:37:08.811663       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:37:08.811656       1 config.go:309] "Starting node config controller"
	I1123 08:37:08.811934       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:37:08.811950       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:37:08.811813       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:37:08.811958       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:37:08.912029       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:37:08.912043       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:37:08.912084       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [cd12fe4fa1b43a5e3f17760eec386c2a2ea817d3934efdc29f8514942ee61362] <==
	E1123 08:36:59.511376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:36:59.515320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:36:59.515572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:36:59.515647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:36:59.515739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:36:59.516922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:36:59.519285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:36:59.519433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:36:59.519827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:36:59.519921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:36:59.520003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:36:59.520058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:36:59.520134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:36:59.520195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:36:59.520251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:36:59.520310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:36:59.520377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:36:59.520437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:36:59.520781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:37:00.397911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:37:00.617353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:37:00.636345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:37:00.657847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:37:00.685614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1123 08:37:01.007076       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:37:06 pause-716098 kubelet[1296]: I1123 08:37:06.694260    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:37:06 pause-716098 kubelet[1296]: I1123 08:37:06.694970    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.869207    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6157e6eb-3223-4d9d-a075-bcff09fb2266-kube-proxy\") pod \"kube-proxy-dm88x\" (UID: \"6157e6eb-3223-4d9d-a075-bcff09fb2266\") " pod="kube-system/kube-proxy-dm88x"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.869257    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6157e6eb-3223-4d9d-a075-bcff09fb2266-xtables-lock\") pod \"kube-proxy-dm88x\" (UID: \"6157e6eb-3223-4d9d-a075-bcff09fb2266\") " pod="kube-system/kube-proxy-dm88x"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.869283    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6157e6eb-3223-4d9d-a075-bcff09fb2266-lib-modules\") pod \"kube-proxy-dm88x\" (UID: \"6157e6eb-3223-4d9d-a075-bcff09fb2266\") " pod="kube-system/kube-proxy-dm88x"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.869315    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bc2\" (UniqueName: \"kubernetes.io/projected/6157e6eb-3223-4d9d-a075-bcff09fb2266-kube-api-access-v4bc2\") pod \"kube-proxy-dm88x\" (UID: \"6157e6eb-3223-4d9d-a075-bcff09fb2266\") " pod="kube-system/kube-proxy-dm88x"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.970038    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvq4d\" (UniqueName: \"kubernetes.io/projected/2bf21882-3e20-4791-8817-830f3ed23c83-kube-api-access-gvq4d\") pod \"kindnet-t9qph\" (UID: \"2bf21882-3e20-4791-8817-830f3ed23c83\") " pod="kube-system/kindnet-t9qph"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.970098    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bf21882-3e20-4791-8817-830f3ed23c83-xtables-lock\") pod \"kindnet-t9qph\" (UID: \"2bf21882-3e20-4791-8817-830f3ed23c83\") " pod="kube-system/kindnet-t9qph"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.970131    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bf21882-3e20-4791-8817-830f3ed23c83-lib-modules\") pod \"kindnet-t9qph\" (UID: \"2bf21882-3e20-4791-8817-830f3ed23c83\") " pod="kube-system/kindnet-t9qph"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.970171    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2bf21882-3e20-4791-8817-830f3ed23c83-cni-cfg\") pod \"kindnet-t9qph\" (UID: \"2bf21882-3e20-4791-8817-830f3ed23c83\") " pod="kube-system/kindnet-t9qph"
	Nov 23 08:37:09 pause-716098 kubelet[1296]: I1123 08:37:09.010868    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-t9qph" podStartSLOduration=2.010841389 podStartE2EDuration="2.010841389s" podCreationTimestamp="2025-11-23 08:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:37:08.999331938 +0000 UTC m=+7.143970145" watchObservedRunningTime="2025-11-23 08:37:09.010841389 +0000 UTC m=+7.155479593"
	Nov 23 08:37:09 pause-716098 kubelet[1296]: I1123 08:37:09.984835    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dm88x" podStartSLOduration=2.984816677 podStartE2EDuration="2.984816677s" podCreationTimestamp="2025-11-23 08:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:37:09.025122188 +0000 UTC m=+7.169760392" watchObservedRunningTime="2025-11-23 08:37:09.984816677 +0000 UTC m=+8.129454875"
	Nov 23 08:37:49 pause-716098 kubelet[1296]: I1123 08:37:49.437772    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:37:49 pause-716098 kubelet[1296]: I1123 08:37:49.574400    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6180f67a-cff6-4d6b-88c9-8f9f44293a04-config-volume\") pod \"coredns-66bc5c9577-h9w4d\" (UID: \"6180f67a-cff6-4d6b-88c9-8f9f44293a04\") " pod="kube-system/coredns-66bc5c9577-h9w4d"
	Nov 23 08:37:49 pause-716098 kubelet[1296]: I1123 08:37:49.574446    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jm48\" (UniqueName: \"kubernetes.io/projected/6180f67a-cff6-4d6b-88c9-8f9f44293a04-kube-api-access-2jm48\") pod \"coredns-66bc5c9577-h9w4d\" (UID: \"6180f67a-cff6-4d6b-88c9-8f9f44293a04\") " pod="kube-system/coredns-66bc5c9577-h9w4d"
	Nov 23 08:37:50 pause-716098 kubelet[1296]: I1123 08:37:50.085878    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h9w4d" podStartSLOduration=43.08585766 podStartE2EDuration="43.08585766s" podCreationTimestamp="2025-11-23 08:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:37:50.085681915 +0000 UTC m=+48.230320129" watchObservedRunningTime="2025-11-23 08:37:50.08585766 +0000 UTC m=+48.230495864"
	Nov 23 08:37:53 pause-716098 kubelet[1296]: W1123 08:37:53.079757    1296 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 08:37:53 pause-716098 kubelet[1296]: E1123 08:37:53.079851    1296 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 23 08:37:53 pause-716098 kubelet[1296]: E1123 08:37:53.079899    1296 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 08:37:53 pause-716098 kubelet[1296]: E1123 08:37:53.079915    1296 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 08:37:53 pause-716098 kubelet[1296]: W1123 08:37:53.180240    1296 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 08:37:58 pause-716098 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:37:58 pause-716098 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:37:58 pause-716098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:37:58 pause-716098 systemd[1]: kubelet.service: Consumed 2.145s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-716098 -n pause-716098
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-716098 -n pause-716098: exit status 2 (378.637222ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-716098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-716098
helpers_test.go:243: (dbg) docker inspect pause-716098:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990",
	        "Created": "2025-11-23T08:36:43.197505702Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186494,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:36:43.242189862Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990/hostname",
	        "HostsPath": "/var/lib/docker/containers/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990/hosts",
	        "LogPath": "/var/lib/docker/containers/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990/880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990-json.log",
	        "Name": "/pause-716098",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-716098:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-716098",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "880cb9d38f960ebda235539449e02e1acc5f29fcf92a83de27aa477d96e3e990",
	                "LowerDir": "/var/lib/docker/overlay2/cfe2c8eba195bcd30e7e38bbd52d91928d4defd3a6f50b62decdca38c153a65e-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfe2c8eba195bcd30e7e38bbd52d91928d4defd3a6f50b62decdca38c153a65e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfe2c8eba195bcd30e7e38bbd52d91928d4defd3a6f50b62decdca38c153a65e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfe2c8eba195bcd30e7e38bbd52d91928d4defd3a6f50b62decdca38c153a65e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-716098",
	                "Source": "/var/lib/docker/volumes/pause-716098/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-716098",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-716098",
	                "name.minikube.sigs.k8s.io": "pause-716098",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "71922cdb62c7fe8ef51d5d9274663e3cf934d647289420ad4d9de05dc14b0adb",
	            "SandboxKey": "/var/run/docker/netns/71922cdb62c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-716098": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95b2e60b45dbc2d26f29e43eb662f327a8c44de55025876ae25398b424d3bba1",
	                    "EndpointID": "dd61500d022b0bae608fd7dc575c6820a160eb7bfd2860e3f7d6ea863f03c3d2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "b2:84:b5:fc:76:97",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-716098",
	                        "880cb9d38f96"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-716098 -n pause-716098
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-716098 -n pause-716098: exit status 2 (355.779979ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-716098 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-716098 logs -n 25: (2.738757864s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-351793 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                 │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo systemctl cat containerd --no-pager                                                                                                                                                                 │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                          │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo cat /etc/containerd/config.toml                                                                                                                                                                     │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo containerd config dump                                                                                                                                                                              │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo systemctl status crio --all --full --no-pager                                                                                                                                                       │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo systemctl cat crio --no-pager                                                                                                                                                                       │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                             │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ ssh     │ -p cilium-351793 sudo crio config                                                                                                                                                                                         │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ delete  │ -p cilium-351793                                                                                                                                                                                                          │ cilium-351793             │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ start   │ -p cert-options-795018 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-795018       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ delete  │ -p NoKubernetes-840508                                                                                                                                                                                                    │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ start   │ -p NoKubernetes-840508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ ssh     │ -p NoKubernetes-840508 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ stop    │ -p NoKubernetes-840508                                                                                                                                                                                                    │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ start   │ -p NoKubernetes-840508 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ start   │ -p pause-716098 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-716098              │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ ssh     │ -p NoKubernetes-840508 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ delete  │ -p NoKubernetes-840508                                                                                                                                                                                                    │ NoKubernetes-840508       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ ssh     │ cert-options-795018 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-795018       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ ssh     │ -p cert-options-795018 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-795018       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:37 UTC │
	│ pause   │ -p pause-716098 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-716098              │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ delete  │ -p cert-options-795018                                                                                                                                                                                                    │ cert-options-795018       │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │ 23 Nov 25 08:38 UTC │
	│ start   │ -p force-systemd-flag-170661 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-170661 │ jenkins │ v1.37.0 │ 23 Nov 25 08:37 UTC │                     │
	│ start   │ -p stopped-upgrade-430008 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-430008    │ jenkins │ v1.32.0 │ 23 Nov 25 08:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:38:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:38:03.151502  211553 out.go:296] Setting OutFile to fd 1 ...
	I1123 08:38:03.151724  211553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1123 08:38:03.151731  211553 out.go:309] Setting ErrFile to fd 2...
	I1123 08:38:03.151738  211553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1123 08:38:03.152006  211553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:38:03.152748  211553 out.go:303] Setting JSON to false
	I1123 08:38:03.154188  211553 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4830,"bootTime":1763882253,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:38:03.154262  211553 start.go:138] virtualization: kvm guest
	I1123 08:38:03.157228  211553 out.go:177] * [stopped-upgrade-430008] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:38:03.158498  211553 out.go:177]   - MINIKUBE_LOCATION=21966
	I1123 08:38:03.158533  211553 notify.go:220] Checking for updates...
	I1123 08:38:03.160063  211553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:38:03.161235  211553 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:38:03.162578  211553 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:38:03.163717  211553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:38:03.164785  211553 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig2042890572
	I1123 08:38:03.166459  211553 config.go:182] Loaded profile config "cert-expiration-747782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:38:03.166587  211553 config.go:182] Loaded profile config "force-systemd-flag-170661": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:38:03.166768  211553 config.go:182] Loaded profile config "pause-716098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:38:03.166870  211553 driver.go:378] Setting default libvirt URI to qemu:///system
	I1123 08:38:03.202967  211553 docker.go:122] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:38:03.203059  211553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:38:03.248587  211553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/last_update_check: {Name:mkb2d8d82a4b20db3410364beab9a20f7bbaba1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:38:03.250793  211553 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1123 08:38:03.252043  211553 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1123 08:38:03.270298  211553 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:38:03.259852158 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:38:03.270418  211553 docker.go:295] overlay module found
	I1123 08:38:03.271777  211553 out.go:177] * Using the docker driver based on user configuration
	I1123 08:38:03.273285  211553 start.go:298] selected driver: docker
	I1123 08:38:03.273293  211553 start.go:902] validating driver "docker" against <nil>
	I1123 08:38:03.273303  211553 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:38:03.273848  211553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:38:03.330896  211553 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:38:03.321572112 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:38:03.331104  211553 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1123 08:38:03.331384  211553 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:38:03.332784  211553 out.go:177] * Using Docker driver with root privileges
	I1123 08:38:03.333813  211553 cni.go:84] Creating CNI manager for ""
	I1123 08:38:03.333824  211553 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:38:03.333833  211553 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:38:03.333852  211553 start_flags.go:323] config:
	{Name:stopped-upgrade-430008 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-430008 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1123 08:38:03.335006  211553 out.go:177] * Starting control plane node stopped-upgrade-430008 in cluster stopped-upgrade-430008
	I1123 08:38:03.335928  211553 cache.go:121] Beginning downloading kic base image for docker with crio
	I1123 08:38:03.336872  211553 out.go:177] * Pulling base image ...
	I1123 08:38:03.337761  211553 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1123 08:38:03.337854  211553 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1123 08:38:03.354597  211553 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1123 08:38:03.354817  211553 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1123 08:38:03.354851  211553 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1123 08:38:03.364461  211553 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1123 08:38:03.364473  211553 cache.go:56] Caching tarball of preloaded images
	I1123 08:38:03.364590  211553 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1123 08:38:03.365980  211553 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
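	The preload tarball URL logged above follows a regular layout: bucket, preload schema version, Kubernetes version, then a filename repeating both plus the container runtime and architecture. A minimal sketch that reassembles it, assuming the pattern generalizes beyond the one URL shown (the helper name and parameters are illustrative, not minikube's API):
	
	    package main
	
	    import "fmt"
	
	    // buildPreloadURL reassembles a preload URL in the layout seen in the
	    // log above. The pattern is inferred from a single example and is an
	    // assumption, not minikube's actual API.
	    func buildPreloadURL(schema, k8sVersion, runtime, arch string) string {
	        return fmt.Sprintf("https://storage.googleapis.com/minikube-preloaded-volume-tarballs/%s/%s/preloaded-images-k8s-%s-%s-%s-overlay-%s.tar.lz4",
	            schema, k8sVersion, schema, k8sVersion, runtime, arch)
	    }
	
	    func main() {
	        // Reproduces the URL fetched at 08:38:03 for v1.28.3 / cri-o / amd64.
	        fmt.Println(buildPreloadURL("v18", "v1.28.3", "cri-o", "amd64"))
	    }
	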
	I1123 08:37:58.689481  209809 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:37:58.689739  209809 start.go:159] libmachine.API.Create for "force-systemd-flag-170661" (driver="docker")
	I1123 08:37:58.689781  209809 client.go:173] LocalClient.Create starting
	I1123 08:37:58.689843  209809 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:37:58.689884  209809 main.go:143] libmachine: Decoding PEM data...
	I1123 08:37:58.689907  209809 main.go:143] libmachine: Parsing certificate...
	I1123 08:37:58.689959  209809 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:37:58.689990  209809 main.go:143] libmachine: Decoding PEM data...
	I1123 08:37:58.690013  209809 main.go:143] libmachine: Parsing certificate...
	I1123 08:37:58.690400  209809 cli_runner.go:164] Run: docker network inspect force-systemd-flag-170661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:37:58.706715  209809 cli_runner.go:211] docker network inspect force-systemd-flag-170661 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:37:58.706790  209809 network_create.go:284] running [docker network inspect force-systemd-flag-170661] to gather additional debugging logs...
	I1123 08:37:58.706809  209809 cli_runner.go:164] Run: docker network inspect force-systemd-flag-170661
	W1123 08:37:58.721941  209809 cli_runner.go:211] docker network inspect force-systemd-flag-170661 returned with exit code 1
	I1123 08:37:58.721968  209809 network_create.go:287] error running [docker network inspect force-systemd-flag-170661]: docker network inspect force-systemd-flag-170661: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-170661 not found
	I1123 08:37:58.721990  209809 network_create.go:289] output of [docker network inspect force-systemd-flag-170661]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-170661 not found
	
	** /stderr **
	I1123 08:37:58.722127  209809 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:37:58.739498  209809 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:37:58.740169  209809 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:37:58.740911  209809 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:37:58.741467  209809 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-101fc7ebbfca IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:8b:69:ec:62:0a} reservation:<nil>}
	I1123 08:37:58.742234  209809 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee2250}
	I1123 08:37:58.742267  209809 network_create.go:124] attempt to create docker network force-systemd-flag-170661 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:37:58.742323  209809 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-170661 force-systemd-flag-170661
	I1123 08:37:58.794897  209809 network_create.go:108] docker network force-systemd-flag-170661 192.168.85.0/24 created
	I1123 08:37:58.794934  209809 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-170661" container
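	The four "skipping subnet" lines and the final pick above trace a simple scan: candidate 192.168.x.0/24 networks advance by 9 in the third octet (49, 58, 67, 76, 85, ...) and the first one not already claimed by a Docker bridge wins. A minimal sketch of that loop, with the step size inferred from this log rather than read from minikube's source:
	
	    package main
	
	    import "fmt"
	
	    func main() {
	        // Third octets already claimed by existing bridges, per the log above.
	        taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
	        for octet := 49; octet <= 244; octet += 9 {
	            if taken[octet] {
	                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
	                continue
	            }
	            fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
	            break
	        }
	    }
	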
	I1123 08:37:58.795001  209809 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:37:58.814603  209809 cli_runner.go:164] Run: docker volume create force-systemd-flag-170661 --label name.minikube.sigs.k8s.io=force-systemd-flag-170661 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:37:58.832185  209809 oci.go:103] Successfully created a docker volume force-systemd-flag-170661
	I1123 08:37:58.832275  209809 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-170661-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-170661 --entrypoint /usr/bin/test -v force-systemd-flag-170661:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:37:59.211796  209809 oci.go:107] Successfully prepared a docker volume force-systemd-flag-170661
	I1123 08:37:59.211885  209809 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:37:59.211899  209809 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:37:59.211956  209809 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-170661:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:38:02.523130  209809 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-170661:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.311074957s)
	I1123 08:38:02.523249  209809 kic.go:203] duration metric: took 3.311341184s to extract preloaded images to volume ...
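	The two docker run invocations above are a sidecar pattern: one throwaway container probes the named volume with /usr/bin/test, then a second unpacks the lz4 preload into it with tar as the entrypoint, so the long-lived node container starts with /var already populated. A minimal sketch of the extraction step driven from Go; the host path, volume name, and image tag below are placeholders:
	
	    package main
	
	    import (
	        "log"
	        "os/exec"
	    )
	
	    func main() {
	        // Unpack a preloaded-images tarball into a named volume by running
	        // tar inside a throwaway container, mirroring the command above.
	        cmd := exec.Command("docker", "run", "--rm",
	            "--entrypoint", "/usr/bin/tar",
	            "-v", "/tmp/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder host path
	            "-v", "example-volume:/extractDir", // placeholder volume name
	            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48", // illustrative tag
	            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	        if out, err := cmd.CombinedOutput(); err != nil {
	            log.Fatalf("extract failed: %v\n%s", err, out)
	        }
	    }
	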
	W1123 08:38:02.523353  209809 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:38:02.523404  209809 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:38:02.523471  209809 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:38:02.583660  209809 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-170661 --name force-systemd-flag-170661 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-170661 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-170661 --network force-systemd-flag-170661 --ip 192.168.85.2 --volume force-systemd-flag-170661:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:38:02.928102  209809 cli_runner.go:164] Run: docker container inspect force-systemd-flag-170661 --format={{.State.Running}}
	I1123 08:38:02.951267  209809 cli_runner.go:164] Run: docker container inspect force-systemd-flag-170661 --format={{.State.Status}}
	I1123 08:38:02.973635  209809 cli_runner.go:164] Run: docker exec force-systemd-flag-170661 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:38:03.029961  209809 oci.go:144] the created container "force-systemd-flag-170661" has a running status.
	I1123 08:38:03.029991  209809 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/force-systemd-flag-170661/id_rsa...
	I1123 08:38:03.118492  209809 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/force-systemd-flag-170661/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1123 08:38:03.118537  209809 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/force-systemd-flag-170661/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:38:03.154011  209809 cli_runner.go:164] Run: docker container inspect force-systemd-flag-170661 --format={{.State.Status}}
	I1123 08:38:03.188304  209809 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:38:03.188326  209809 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-170661 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:38:03.257719  209809 cli_runner.go:164] Run: docker container inspect force-systemd-flag-170661 --format={{.State.Status}}
	I1123 08:38:03.277250  209809 machine.go:94] provisionDockerMachine start ...
	I1123 08:38:03.277321  209809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-170661
	I1123 08:38:03.296714  209809 main.go:143] libmachine: Using SSH client type: native
	I1123 08:38:03.297076  209809 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33016 <nil> <nil>}
	I1123 08:38:03.297096  209809 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:38:03.297755  209809 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49318->127.0.0.1:33016: read: connection reset by peer
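	The "connection reset by peer" above is a routine race: the container was started milliseconds earlier and its sshd is not yet accepting connections on the published port (127.0.0.1:33016 here), so provisioning simply retries the dial. A minimal retry loop under that assumption, using golang.org/x/crypto/ssh; the user, port, and empty auth list are placeholders:
	
	    package main
	
	    import (
	        "log"
	        "time"
	
	        "golang.org/x/crypto/ssh"
	    )
	
	    func main() {
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{}, // key-based auth omitted in this sketch
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	            Timeout:         5 * time.Second,
	        }
	        for attempt := 1; attempt <= 10; attempt++ {
	            client, err := ssh.Dial("tcp", "127.0.0.1:33016", cfg)
	            if err == nil {
	                client.Close()
	                log.Println("sshd is up")
	                return
	            }
	            log.Printf("attempt %d: %v", attempt, err)
	            time.Sleep(time.Second)
	        }
	        log.Fatal("sshd never came up")
	    }
	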
	
	
	==> CRI-O <==
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.713930924Z" level=info msg="RDT not available in the host system"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.713943788Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.714810705Z" level=info msg="Conmon does support the --sync option"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.714831077Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.714847531Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.715758517Z" level=info msg="Conmon does support the --sync option"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.715778033Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.719855815Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.719875491Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.720338754Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.72071548Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.720765413Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.795994251Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-h9w4d Namespace:kube-system ID:2ea2822c9733ce277fbb7c43a7f6c85ec77e8cce9ea1137e5b49f1e0ea9eb162 UID:6180f67a-cff6-4d6b-88c9-8f9f44293a04 NetNS:/var/run/netns/0bccb5de-5512-46a3-8b21-d671f5c29745 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000892288}] Aliases:map[]}"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796145456Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-h9w4d for CNI network kindnet (type=ptp)"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796513703Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796534737Z" level=info msg="Starting seccomp notifier watcher"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796582264Z" level=info msg="Create NRI interface"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796672009Z" level=info msg="built-in NRI default validator is disabled"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796701394Z" level=info msg="runtime interface created"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796715222Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796722419Z" level=info msg="runtime interface starting up..."
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796729959Z" level=info msg="starting plugins..."
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796744115Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 23 08:37:54 pause-716098 crio[2183]: time="2025-11-23T08:37:54.796988541Z" level=info msg="No systemd watchdog enabled"
	Nov 23 08:37:54 pause-716098 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
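	The configuration dump a few lines up is TOML; the fields most relevant to this run are cgroup_manager = "systemd" (matching the host Docker's systemd cgroup driver) and default_runtime = "crun". A minimal sketch of reading those two fields from such a file, using the BurntSushi/toml library as an arbitrary choice of parser, not necessarily what CRI-O uses:
	
	    package main
	
	    import (
	        "fmt"
	        "log"
	
	        "github.com/BurntSushi/toml"
	    )
	
	    type crioConfig struct {
	        Crio struct {
	            Runtime struct {
	                CgroupManager  string `toml:"cgroup_manager"`
	                DefaultRuntime string `toml:"default_runtime"`
	            } `toml:"runtime"`
	        } `toml:"crio"`
	    }
	
	    func main() {
	        // A two-field excerpt of the dump above; the full file parses the same way.
	        data := `
	    [crio]
	      [crio.runtime]
	        cgroup_manager = "systemd"
	        default_runtime = "crun"
	    `
	        var cfg crioConfig
	        if _, err := toml.Decode(data, &cfg); err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println(cfg.Crio.Runtime.CgroupManager, cfg.Crio.Runtime.DefaultRuntime)
	    }
	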
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a52c570c00b3c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago       Running             coredns                   0                   2ea2822c9733c       coredns-66bc5c9577-h9w4d               kube-system
	3b91434da8e24       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   55 seconds ago       Running             kindnet-cni               0                   58152b038ad91       kindnet-t9qph                          kube-system
	3142ab901cf20       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   55 seconds ago       Running             kube-proxy                0                   dc8ebf704f566       kube-proxy-dm88x                       kube-system
	a0645edd38940       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   5d0581a890f76       kube-controller-manager-pause-716098   kube-system
	b1d6f513b9fd5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   a6e2b86d6640d       kube-apiserver-pause-716098            kube-system
	fa9a01040e3eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   164efa411434c       etcd-pause-716098                      kube-system
	cd12fe4fa1b43       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   3761ebce7c6ba       kube-scheduler-pause-716098            kube-system
	
	
	==> coredns [a52c570c00b3c3ae086eca845946a180891d48e3d97464103b3987d988c25812] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39594 - 23494 "HINFO IN 4397778813889748474.6200094615082072776. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.058492342s
	
	
	==> describe nodes <==
	Name:               pause-716098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-716098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=pause-716098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_37_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:36:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-716098
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:37:52 +0000   Sun, 23 Nov 2025 08:36:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:37:52 +0000   Sun, 23 Nov 2025 08:36:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:37:52 +0000   Sun, 23 Nov 2025 08:36:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:37:52 +0000   Sun, 23 Nov 2025 08:37:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-716098
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                eb958d43-02ba-4af1-a7b0-45e8d97c885e
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-h9w4d                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-pause-716098                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         62s
	  kube-system                 kindnet-t9qph                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-pause-716098             250m (3%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-pause-716098    200m (2%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-dm88x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-pause-716098             100m (1%)     0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 55s   kube-proxy       
	  Normal  Starting                 63s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s   kubelet          Node pause-716098 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s   kubelet          Node pause-716098 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s   kubelet          Node pause-716098 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s   node-controller  Node pause-716098 event: Registered Node pause-716098 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-716098 status is now: NodeReady
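	For reference, the percentages in the Allocated resources table above are requests (or limits) divided by the node's allocatable capacity, apparently truncated to whole percents: CPU requests are 850m of 8000m allocatable (850/8000 ≈ 10.6%, shown as 10%), CPU limits 100m/8000m = 1.25% (shown as 1%), and memory is 220Mi of 32863360Ki ≈ 32093Mi (220/32093 ≈ 0.69%, shown as 0%).
	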
	
	
	==> dmesg <==
	[  +0.079858] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024030] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.151122] kauditd_printk_skb: 47 callbacks suppressed
	[Nov23 07:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.034290] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023878] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023870] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +2.047767] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +4.031598] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[  +8.127154] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[ +16.382339] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	[Nov23 07:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 56 4d 29 70 e0 1b 26 a7 d7 6d 31 bb 08 00
	
	
	==> etcd [fa9a01040e3eb099588692c065781ee7e96ce362f87dac90c505e830351cf439] <==
	{"level":"info","ts":"2025-11-23T08:37:07.891000Z","caller":"traceutil/trace.go:172","msg":"trace[639521649] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"315.890492ms","start":"2025-11-23T08:37:07.575094Z","end":"2025-11-23T08:37:07.890985Z","steps":["trace[639521649] 'process raft request'  (duration: 153.65399ms)","trace[639521649] 'compare'  (duration: 161.95955ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:37:07.891039Z","caller":"traceutil/trace.go:172","msg":"trace[914348219] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"270.258921ms","start":"2025-11-23T08:37:07.620775Z","end":"2025-11-23T08:37:07.891034Z","steps":["trace[914348219] 'process raft request'  (duration: 270.177857ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:37:07.891033Z","caller":"traceutil/trace.go:172","msg":"trace[821324252] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"268.806865ms","start":"2025-11-23T08:37:07.622217Z","end":"2025-11-23T08:37:07.891024Z","steps":["trace[821324252] 'process raft request'  (duration: 268.765601ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:37:07.891041Z","caller":"traceutil/trace.go:172","msg":"trace[1846471931] linearizableReadLoop","detail":"{readStateIndex:345; appliedIndex:343; }","duration":"162.231598ms","start":"2025-11-23T08:37:07.728794Z","end":"2025-11-23T08:37:07.891025Z","steps":["trace[1846471931] 'read index received'  (duration: 138.086219ms)","trace[1846471931] 'applied index is now lower than readState.Index'  (duration: 24.143689ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:37:07.891069Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:37:07.575083Z","time spent":"315.956052ms","remote":"127.0.0.1:54106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":657,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kindnet.187a95e7f5407483\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet.187a95e7f5407483\" value_size:586 lease:4650418170336298560 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T08:37:07.891015Z","caller":"traceutil/trace.go:172","msg":"trace[347522216] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"315.394004ms","start":"2025-11-23T08:37:07.575608Z","end":"2025-11-23T08:37:07.891002Z","steps":["trace[347522216] 'process raft request'  (duration: 315.300414ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:37:07.891184Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T08:37:07.575602Z","time spent":"315.563555ms","remote":"127.0.0.1:54106","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":691,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-dm88x.187a95e7f67b08f1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-dm88x.187a95e7f67b08f1\" value_size:611 lease:4650418170336298560 >> failure:<>"}
	{"level":"warn","ts":"2025-11-23T08:37:07.891202Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"230.341918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-23T08:37:07.891232Z","caller":"traceutil/trace.go:172","msg":"trace[1289611047] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:336; }","duration":"230.377476ms","start":"2025-11-23T08:37:07.660846Z","end":"2025-11-23T08:37:07.891223Z","steps":["trace[1289611047] 'agreement among raft nodes before linearized reading'  (duration: 230.2686ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:37:08.029237Z","caller":"traceutil/trace.go:172","msg":"trace[880256516] linearizableReadLoop","detail":"{readStateIndex:349; appliedIndex:349; }","duration":"110.179076ms","start":"2025-11-23T08:37:07.919035Z","end":"2025-11-23T08:37:08.029215Z","steps":["trace[880256516] 'read index received'  (duration: 110.17209ms)","trace[880256516] 'applied index is now lower than readState.Index'  (duration: 5.919µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:37:08.186886Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"267.829488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-t9qph\" limit:1 ","response":"range_response_count:1 size:3692"}
	{"level":"info","ts":"2025-11-23T08:37:08.186948Z","caller":"traceutil/trace.go:172","msg":"trace[1963481777] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-t9qph; range_end:; response_count:1; response_revision:338; }","duration":"267.903731ms","start":"2025-11-23T08:37:07.919030Z","end":"2025-11-23T08:37:08.186934Z","steps":["trace[1963481777] 'agreement among raft nodes before linearized reading'  (duration: 110.373761ms)","trace[1963481777] 'range keys from in-memory index tree'  (duration: 157.356103ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:37:08.187370Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.907894ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790207191074649 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:336 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4235 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:37:08.187532Z","caller":"traceutil/trace.go:172","msg":"trace[1947669207] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"267.022253ms","start":"2025-11-23T08:37:07.920499Z","end":"2025-11-23T08:37:08.187521Z","steps":["trace[1947669207] 'process raft request'  (duration: 266.965375ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:37:08.187540Z","caller":"traceutil/trace.go:172","msg":"trace[402077253] linearizableReadLoop","detail":"{readStateIndex:350; appliedIndex:349; }","duration":"158.080996ms","start":"2025-11-23T08:37:08.029448Z","end":"2025-11-23T08:37:08.187529Z","steps":["trace[402077253] 'read index received'  (duration: 155.363482ms)","trace[402077253] 'applied index is now lower than readState.Index'  (duration: 2.716716ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:37:08.187619Z","caller":"traceutil/trace.go:172","msg":"trace[1082486262] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"272.12721ms","start":"2025-11-23T08:37:07.915480Z","end":"2025-11-23T08:37:08.187607Z","steps":["trace[1082486262] 'process raft request'  (duration: 113.916701ms)","trace[1082486262] 'compare'  (duration: 157.709086ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:37:08.187776Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"209.913128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-11-23T08:37:08.187944Z","caller":"traceutil/trace.go:172","msg":"trace[1541705457] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:340; }","duration":"210.086281ms","start":"2025-11-23T08:37:07.977849Z","end":"2025-11-23T08:37:08.187935Z","steps":["trace[1541705457] 'agreement among raft nodes before linearized reading'  (duration: 209.820428ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:37:08.187862Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.945784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-11-23T08:37:08.188050Z","caller":"traceutil/trace.go:172","msg":"trace[144208957] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:340; }","duration":"115.133083ms","start":"2025-11-23T08:37:08.072907Z","end":"2025-11-23T08:37:08.188040Z","steps":["trace[144208957] 'agreement among raft nodes before linearized reading'  (duration: 114.894036ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:37:08.187865Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.878341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-11-23T08:37:08.188134Z","caller":"traceutil/trace.go:172","msg":"trace[1461514322] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:340; }","duration":"181.148319ms","start":"2025-11-23T08:37:08.006975Z","end":"2025-11-23T08:37:08.188123Z","steps":["trace[1461514322] 'agreement among raft nodes before linearized reading'  (duration: 180.817993ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:38:01.146870Z","caller":"traceutil/trace.go:172","msg":"trace[1156819154] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"152.101566ms","start":"2025-11-23T08:38:00.994754Z","end":"2025-11-23T08:38:01.146855Z","steps":["trace[1156819154] 'process raft request'  (duration: 130.1779ms)","trace[1156819154] 'compare'  (duration: 21.847669ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:38:02.022919Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.966106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:38:02.022987Z","caller":"traceutil/trace.go:172","msg":"trace[1331416871] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:415; }","duration":"164.068197ms","start":"2025-11-23T08:38:01.858903Z","end":"2025-11-23T08:38:02.022971Z","steps":["trace[1331416871] 'range keys from in-memory index tree'  (duration: 163.89496ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:38:04 up  1:20,  0 user,  load average: 2.55, 1.76, 1.26
	Linux pause-716098 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3b91434da8e2424aa2abc9c2b29c3bff72cd58d0ef5c628934c46e0bb7ee4f74] <==
	I1123 08:37:08.796012       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:37:08.796256       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:37:08.796384       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:37:08.796405       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:37:08.796416       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:37:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:37:08.996502       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:37:08.996535       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:37:08.996548       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:37:08.996671       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:37:38.997679       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:37:38.997704       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:37:38.997728       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:37:38.997731       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 08:37:40.197563       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:37:40.197589       1 metrics.go:72] Registering metrics
	I1123 08:37:40.197667       1 controller.go:711] "Syncing nftables rules"
	I1123 08:37:49.002963       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:37:49.003013       1 main.go:301] handling current node
	I1123 08:37:59.000762       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:37:59.000793       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b1d6f513b9fd574eaf72e9455cc6b87b3204d19841888239ce70a30b092c7a8f] <==
	I1123 08:36:59.571471       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:36:59.575280       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 08:36:59.575315       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 08:36:59.589282       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:36:59.593637       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:36:59.602525       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:36:59.604287       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:36:59.745589       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:37:00.373455       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:37:00.377454       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:37:00.377472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:37:00.839670       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:37:00.869160       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:37:00.978389       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:37:00.984027       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 08:37:00.984919       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:37:00.988612       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:37:01.432858       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:37:02.082726       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:37:02.092464       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:37:02.100159       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:37:06.971272       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:37:07.191927       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:37:07.431255       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:37:07.557000       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a0645edd38940accf406db609c29284d37f09d717f50532bc8d93333044d21af] <==
	I1123 08:37:06.433525       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:37:06.433531       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:37:06.434542       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:37:06.435715       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:37:06.453197       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:37:06.459341       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:37:06.460553       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:37:06.464728       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:37:06.465789       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:37:06.472037       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:37:06.475234       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:37:06.477453       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:37:06.479708       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:37:06.479727       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:37:06.479800       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:37:06.479904       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-716098"
	I1123 08:37:06.479979       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:37:06.480184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:37:06.480201       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:37:06.480208       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:37:06.480301       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:37:06.483134       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:37:06.494594       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:37:06.599031       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-716098" podCIDRs=["10.244.0.0/24"]
	I1123 08:37:51.487068       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3142ab901cf20766a2af10228926409fa9b71496197851ff1d4bbe355dc29f0e] <==
	I1123 08:37:08.614185       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:37:08.675272       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:37:08.777153       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:37:08.777255       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:37:08.777379       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:37:08.801432       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:37:08.801476       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:37:08.808624       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:37:08.809045       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:37:08.809337       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:37:08.810995       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:37:08.811496       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:37:08.811626       1 config.go:200] "Starting service config controller"
	I1123 08:37:08.811663       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:37:08.811656       1 config.go:309] "Starting node config controller"
	I1123 08:37:08.811934       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:37:08.811950       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:37:08.811813       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:37:08.811958       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:37:08.912029       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:37:08.912043       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:37:08.912084       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [cd12fe4fa1b43a5e3f17760eec386c2a2ea817d3934efdc29f8514942ee61362] <==
	E1123 08:36:59.511376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:36:59.515320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:36:59.515572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:36:59.515647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:36:59.515739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:36:59.516922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:36:59.519285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:36:59.519433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:36:59.519827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:36:59.519921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:36:59.520003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:36:59.520058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:36:59.520134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:36:59.520195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:36:59.520251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:36:59.520310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:36:59.520377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:36:59.520437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:36:59.520781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:37:00.397911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:37:00.617353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:37:00.636345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:37:00.657847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:37:00.685614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1123 08:37:01.007076       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:37:06 pause-716098 kubelet[1296]: I1123 08:37:06.694260    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:37:06 pause-716098 kubelet[1296]: I1123 08:37:06.694970    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.869207    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6157e6eb-3223-4d9d-a075-bcff09fb2266-kube-proxy\") pod \"kube-proxy-dm88x\" (UID: \"6157e6eb-3223-4d9d-a075-bcff09fb2266\") " pod="kube-system/kube-proxy-dm88x"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.869257    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6157e6eb-3223-4d9d-a075-bcff09fb2266-xtables-lock\") pod \"kube-proxy-dm88x\" (UID: \"6157e6eb-3223-4d9d-a075-bcff09fb2266\") " pod="kube-system/kube-proxy-dm88x"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.869283    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6157e6eb-3223-4d9d-a075-bcff09fb2266-lib-modules\") pod \"kube-proxy-dm88x\" (UID: \"6157e6eb-3223-4d9d-a075-bcff09fb2266\") " pod="kube-system/kube-proxy-dm88x"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.869315    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bc2\" (UniqueName: \"kubernetes.io/projected/6157e6eb-3223-4d9d-a075-bcff09fb2266-kube-api-access-v4bc2\") pod \"kube-proxy-dm88x\" (UID: \"6157e6eb-3223-4d9d-a075-bcff09fb2266\") " pod="kube-system/kube-proxy-dm88x"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.970038    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvq4d\" (UniqueName: \"kubernetes.io/projected/2bf21882-3e20-4791-8817-830f3ed23c83-kube-api-access-gvq4d\") pod \"kindnet-t9qph\" (UID: \"2bf21882-3e20-4791-8817-830f3ed23c83\") " pod="kube-system/kindnet-t9qph"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.970098    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bf21882-3e20-4791-8817-830f3ed23c83-xtables-lock\") pod \"kindnet-t9qph\" (UID: \"2bf21882-3e20-4791-8817-830f3ed23c83\") " pod="kube-system/kindnet-t9qph"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.970131    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bf21882-3e20-4791-8817-830f3ed23c83-lib-modules\") pod \"kindnet-t9qph\" (UID: \"2bf21882-3e20-4791-8817-830f3ed23c83\") " pod="kube-system/kindnet-t9qph"
	Nov 23 08:37:07 pause-716098 kubelet[1296]: I1123 08:37:07.970171    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2bf21882-3e20-4791-8817-830f3ed23c83-cni-cfg\") pod \"kindnet-t9qph\" (UID: \"2bf21882-3e20-4791-8817-830f3ed23c83\") " pod="kube-system/kindnet-t9qph"
	Nov 23 08:37:09 pause-716098 kubelet[1296]: I1123 08:37:09.010868    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-t9qph" podStartSLOduration=2.010841389 podStartE2EDuration="2.010841389s" podCreationTimestamp="2025-11-23 08:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:37:08.999331938 +0000 UTC m=+7.143970145" watchObservedRunningTime="2025-11-23 08:37:09.010841389 +0000 UTC m=+7.155479593"
	Nov 23 08:37:09 pause-716098 kubelet[1296]: I1123 08:37:09.984835    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dm88x" podStartSLOduration=2.984816677 podStartE2EDuration="2.984816677s" podCreationTimestamp="2025-11-23 08:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:37:09.025122188 +0000 UTC m=+7.169760392" watchObservedRunningTime="2025-11-23 08:37:09.984816677 +0000 UTC m=+8.129454875"
	Nov 23 08:37:49 pause-716098 kubelet[1296]: I1123 08:37:49.437772    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:37:49 pause-716098 kubelet[1296]: I1123 08:37:49.574400    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6180f67a-cff6-4d6b-88c9-8f9f44293a04-config-volume\") pod \"coredns-66bc5c9577-h9w4d\" (UID: \"6180f67a-cff6-4d6b-88c9-8f9f44293a04\") " pod="kube-system/coredns-66bc5c9577-h9w4d"
	Nov 23 08:37:49 pause-716098 kubelet[1296]: I1123 08:37:49.574446    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jm48\" (UniqueName: \"kubernetes.io/projected/6180f67a-cff6-4d6b-88c9-8f9f44293a04-kube-api-access-2jm48\") pod \"coredns-66bc5c9577-h9w4d\" (UID: \"6180f67a-cff6-4d6b-88c9-8f9f44293a04\") " pod="kube-system/coredns-66bc5c9577-h9w4d"
	Nov 23 08:37:50 pause-716098 kubelet[1296]: I1123 08:37:50.085878    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h9w4d" podStartSLOduration=43.08585766 podStartE2EDuration="43.08585766s" podCreationTimestamp="2025-11-23 08:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:37:50.085681915 +0000 UTC m=+48.230320129" watchObservedRunningTime="2025-11-23 08:37:50.08585766 +0000 UTC m=+48.230495864"
	Nov 23 08:37:53 pause-716098 kubelet[1296]: W1123 08:37:53.079757    1296 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 08:37:53 pause-716098 kubelet[1296]: E1123 08:37:53.079851    1296 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 23 08:37:53 pause-716098 kubelet[1296]: E1123 08:37:53.079899    1296 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 08:37:53 pause-716098 kubelet[1296]: E1123 08:37:53.079915    1296 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 23 08:37:53 pause-716098 kubelet[1296]: W1123 08:37:53.180240    1296 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 23 08:37:58 pause-716098 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:37:58 pause-716098 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:37:58 pause-716098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:37:58 pause-716098 systemd[1]: kubelet.service: Consumed 2.145s CPU time.
	

-- /stdout --
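
The kubelet tail above shows the failure mode: while the profile is being paused, dials to /var/run/crio/crio.sock start returning "no such file or directory" and GenericPLEG can no longer list pods. A minimal manual re-check of the runtime state, assuming the pause-716098 profile is still up (illustrative commands, not part of the test harness):

	minikube ssh -p pause-716098 "sudo systemctl status crio --no-pager"
	minikube ssh -p pause-716098 "ls -l /var/run/crio/crio.sock"
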
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-716098 -n pause-716098
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-716098 -n pause-716098: exit status 2 (355.307944ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-716098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-057894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-057894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (504.726474ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:43:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
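
The MK_ADDON_ENABLE_PAUSED exit comes from minikube's paused-state probe: it runs the exact command quoted in the error on the node, and runc fails because its state directory /run/runc is missing. A sketch of reproducing the probe by hand (profile name taken from this test; commands are illustrative, not part of the harness):

	minikube ssh -p old-k8s-version-057894 "sudo runc list -f json"
	minikube ssh -p old-k8s-version-057894 "ls -ld /run/runc"
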
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-057894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-057894 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-057894 describe deploy/metrics-server -n kube-system: exit status 1 (89.750147ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-057894 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
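
The assertion above compares the metrics-server container image against the fake.domain registry override, but the deployment was never created, so there was nothing to inspect. When the deployment does exist, the image field it checks can be read directly (a sketch only, using the same kubectl context as the test):

	kubectl --context old-k8s-version-057894 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
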
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-057894
helpers_test.go:243: (dbg) docker inspect old-k8s-version-057894:

-- stdout --
	[
	    {
	        "Id": "521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007",
	        "Created": "2025-11-23T08:42:58.872833839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289915,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:42:58.932763Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007-json.log",
	        "Name": "/old-k8s-version-057894",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-057894:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-057894",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007",
	                "LowerDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-057894",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-057894/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-057894",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-057894",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-057894",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a6f932514fe7d63033688092438b87770a09996f1713613695ab0c967e0a604e",
	            "SandboxKey": "/var/run/docker/netns/a6f932514fe7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-057894": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c80b7bca17a7fb714fa079981c2a6d3c533cb55d656f0653a2df50f0ca949782",
	                    "EndpointID": "1ef193a001d19af393738b17189c4e4a100d8f9a9c2c24518d366c343eddb0df",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "32:59:de:c5:d9:4e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-057894",
	                        "521ae9646520"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
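
The inspect output above shows the API server port 8443/tcp published on 127.0.0.1:33094. A one-liner to extract just that mapping with docker inspect's Go template (illustrative; same container as above):

	docker inspect old-k8s-version-057894 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
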
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-057894 -n old-k8s-version-057894
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-057894 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-057894 logs -n 25: (1.243997419s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-351793 sudo ip r s                                                   │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo iptables-save                                            │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo iptables -t nat -L -n -v                                 │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status kubelet --all --full --no-pager         │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl cat kubelet --no-pager                         │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status docker --all --full --no-pager          │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat docker --no-pager                          │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/docker/daemon.json                              │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo docker system info                                       │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat cri-docker --no-pager                      │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cri-dockerd --version                                    │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status containerd --all --full --no-pager      │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat containerd --no-pager                      │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /lib/systemd/system/containerd.service               │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/containerd/config.toml                          │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo containerd config dump                                   │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status crio --all --full --no-pager            │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl cat crio --no-pager                            │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-351793 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:43:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:43:33.246440  301517 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:43:33.246716  301517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:33.246728  301517 out.go:374] Setting ErrFile to fd 2...
	I1123 08:43:33.246733  301517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:43:33.246935  301517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:43:33.247358  301517 out.go:368] Setting JSON to false
	I1123 08:43:33.248510  301517 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5160,"bootTime":1763882253,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:43:33.248565  301517 start.go:143] virtualization: kvm guest
	I1123 08:43:33.250193  301517 out.go:179] * [default-k8s-diff-port-726261] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:43:33.251223  301517 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:43:33.251215  301517 notify.go:221] Checking for updates...
	I1123 08:43:33.252411  301517 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:43:33.253572  301517 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:43:33.254577  301517 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:43:33.255615  301517 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:43:33.256549  301517 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:43:33.258233  301517 config.go:182] Loaded profile config "bridge-351793": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:43:33.258336  301517 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:43:33.258421  301517 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:43:33.258498  301517 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:43:33.282217  301517 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:43:33.282319  301517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:33.338764  301517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-23 08:43:33.328937213 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:43:33.338900  301517 docker.go:319] overlay module found
	I1123 08:43:33.341125  301517 out.go:179] * Using the docker driver based on user configuration
	I1123 08:43:33.342137  301517 start.go:309] selected driver: docker
	I1123 08:43:33.342151  301517 start.go:927] validating driver "docker" against <nil>
	I1123 08:43:33.342165  301517 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:43:33.342664  301517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:43:33.396519  301517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-23 08:43:33.387171696 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:43:33.396674  301517 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:43:33.396948  301517 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:43:33.398298  301517 out.go:179] * Using Docker driver with root privileges
	I1123 08:43:33.399236  301517 cni.go:84] Creating CNI manager for ""
	I1123 08:43:33.399303  301517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:43:33.399315  301517 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:43:33.399375  301517 start.go:353] cluster config:
	{Name:default-k8s-diff-port-726261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-726261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:43:33.400496  301517 out.go:179] * Starting "default-k8s-diff-port-726261" primary control-plane node in "default-k8s-diff-port-726261" cluster
	I1123 08:43:33.401432  301517 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:43:33.402458  301517 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:43:33.403532  301517 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:43:33.403558  301517 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:43:33.403565  301517 cache.go:65] Caching tarball of preloaded images
	I1123 08:43:33.403628  301517 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:43:33.403622  301517 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:43:33.403638  301517 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:43:33.403721  301517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/config.json ...
	I1123 08:43:33.403738  301517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/config.json: {Name:mkfc8ddbaf9b536511d28b718802ae7794e82c18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:33.423921  301517 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:43:33.423943  301517 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:43:33.423956  301517 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:43:33.423985  301517 start.go:360] acquireMachinesLock for default-k8s-diff-port-726261: {Name:mk20f43ab6ba07638ede58293a3ae7a0dcd304cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:43:33.424075  301517 start.go:364] duration metric: took 71.945µs to acquireMachinesLock for "default-k8s-diff-port-726261"
	I1123 08:43:33.424103  301517 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-726261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-726261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:43:33.424170  301517 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:43:29.188146  299523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:43:29.188374  299523 start.go:159] libmachine.API.Create for "no-preload-187607" (driver="docker")
	I1123 08:43:29.188400  299523 client.go:173] LocalClient.Create starting
	I1123 08:43:29.188455  299523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:43:29.188486  299523 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:29.188512  299523 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:29.188566  299523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:43:29.188589  299523 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:29.188602  299523 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:29.189081  299523 cli_runner.go:164] Run: docker network inspect no-preload-187607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:43:29.220674  299523 cli_runner.go:211] docker network inspect no-preload-187607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:43:29.221180  299523 network_create.go:284] running [docker network inspect no-preload-187607] to gather additional debugging logs...
	I1123 08:43:29.221204  299523 cli_runner.go:164] Run: docker network inspect no-preload-187607
	W1123 08:43:29.264129  299523 cli_runner.go:211] docker network inspect no-preload-187607 returned with exit code 1
	I1123 08:43:29.264180  299523 network_create.go:287] error running [docker network inspect no-preload-187607]: docker network inspect no-preload-187607: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-187607 not found
	I1123 08:43:29.264206  299523 network_create.go:289] output of [docker network inspect no-preload-187607]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-187607 not found
	
	** /stderr **
	I1123 08:43:29.264320  299523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:29.305352  299523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:43:29.306953  299523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:43:29.308246  299523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:43:29.309382  299523 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c80b7bca17a7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:f1:41:59:09:b5} reservation:<nil>}
	I1123 08:43:29.310180  299523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2a68820c528f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:8e:23:15:d4:81:ca} reservation:<nil>}
	I1123 08:43:29.310791  299523 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:43:29.312370  299523 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:43:29.312975  299523 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ff9c00}
	I1123 08:43:29.313008  299523 network_create.go:124] attempt to create docker network no-preload-187607 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1123 08:43:29.313073  299523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-187607 no-preload-187607
	I1123 08:43:29.323176  299523 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:43:29.325705  299523 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:43:29.327908  299523 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1123 08:43:29.341667  299523 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:43:29.369572  299523 cache.go:162] opening:  /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:43:29.392304  299523 network_create.go:108] docker network no-preload-187607 192.168.94.0/24 created
	I1123 08:43:29.392335  299523 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-187607" container
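For reference, the subnet scan and network_create steps above boil down to a single Docker command; every flag below is copied from the logged invocation, and the profile name and the 192.168.94.0/24 subnet are specific to this run (minikube picked the first free /24 after skipping the taken ones listed above):

	# create an isolated bridge network for the profile; the driver options are
	# passed exactly as minikube passes them (note the odd "-o --ip-masq" form)
	docker network create --driver=bridge \
	  --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=no-preload-187607 \
	  no-preload-187607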
	I1123 08:43:29.392408  299523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:43:29.404846  299523 cache.go:157] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1123 08:43:29.405152  299523 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 254.387685ms
	I1123 08:43:29.405180  299523 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 08:43:29.421699  299523 cli_runner.go:164] Run: docker volume create no-preload-187607 --label name.minikube.sigs.k8s.io=no-preload-187607 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:43:29.454766  299523 oci.go:103] Successfully created a docker volume no-preload-187607
	I1123 08:43:29.454841  299523 cli_runner.go:164] Run: docker run --rm --name no-preload-187607-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-187607 --entrypoint /usr/bin/test -v no-preload-187607:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
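The volume step pairs a labelled docker volume create with a throwaway --rm "preload sidecar" whose entrypoint is just /usr/bin/test -d /var/lib: mounting the fresh volume at /var seeds it from the image's /var contents (standard named-volume behavior), and the test confirms the result. Reassembled from the two Run lines above:

	docker volume create no-preload-187607 \
	  --label name.minikube.sigs.k8s.io=no-preload-187607 \
	  --label created_by.minikube.sigs.k8s.io=true
	# the sidecar exits immediately; its useful side effect is populating the volume
	docker run --rm --name no-preload-187607-preload-sidecar \
	  --entrypoint /usr/bin/test -v no-preload-187607:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f \
	  -d /var/lib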
	I1123 08:43:29.751927  299523 cache.go:157] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 08:43:29.751962  299523 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 601.60655ms
	I1123 08:43:29.751982  299523 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 08:43:30.774649  299523 cli_runner.go:217] Completed: docker run --rm --name no-preload-187607-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-187607 --entrypoint /usr/bin/test -v no-preload-187607:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (1.319762833s)
	I1123 08:43:30.774784  299523 oci.go:107] Successfully prepared a docker volume no-preload-187607
	I1123 08:43:30.774845  299523 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1123 08:43:30.774964  299523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:43:30.775017  299523 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:43:30.775083  299523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:43:30.777710  299523 cache.go:157] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 08:43:30.777744  299523 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.627060606s
	I1123 08:43:30.777762  299523 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 08:43:30.791385  299523 cache.go:157] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 08:43:30.791422  299523 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.638195888s
	I1123 08:43:30.791438  299523 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 08:43:30.864173  299523 cache.go:157] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 08:43:30.864213  299523 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.713484536s
	I1123 08:43:30.864226  299523 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 08:43:30.874402  299523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-187607 --name no-preload-187607 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-187607 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-187607 --network no-preload-187607 --ip 192.168.94.2 --volume no-preload-187607:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:43:31.058890  299523 cache.go:157] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 08:43:31.061036  299523 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.910616613s
	I1123 08:43:31.061061  299523 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 08:43:31.356748  299523 cache.go:157] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 08:43:31.356775  299523 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.204618522s
	I1123 08:43:31.356786  299523 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 08:43:31.356807  299523 cache.go:87] Successfully saved all images to host disk.
	I1123 08:43:31.608226  299523 cli_runner.go:164] Run: docker container inspect no-preload-187607 --format={{.State.Running}}
	I1123 08:43:31.628354  299523 cli_runner.go:164] Run: docker container inspect no-preload-187607 --format={{.State.Status}}
	I1123 08:43:31.647108  299523 cli_runner.go:164] Run: docker exec no-preload-187607 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:43:31.696034  299523 oci.go:144] the created container "no-preload-187607" has a running status.
	I1123 08:43:31.696063  299523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa...
	I1123 08:43:31.849472  299523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:43:31.873211  299523 cli_runner.go:164] Run: docker container inspect no-preload-187607 --format={{.State.Status}}
	I1123 08:43:31.890321  299523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:43:31.890340  299523 kic_runner.go:114] Args: [docker exec --privileged no-preload-187607 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:43:31.934893  299523 cli_runner.go:164] Run: docker container inspect no-preload-187607 --format={{.State.Status}}
	I1123 08:43:31.951617  299523 machine.go:94] provisionDockerMachine start ...
	I1123 08:43:31.951721  299523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:43:31.967766  299523 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:31.968071  299523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1123 08:43:31.968088  299523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:43:31.968671  299523 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60450->127.0.0.1:33096: read: connection reset by peer
	W1123 08:43:34.891312  288790 node_ready.go:57] node "old-k8s-version-057894" has "Ready":"False" status (will retry)
	W1123 08:43:37.390677  288790 node_ready.go:57] node "old-k8s-version-057894" has "Ready":"False" status (will retry)
	I1123 08:43:33.425645  301517 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:43:33.425860  301517 start.go:159] libmachine.API.Create for "default-k8s-diff-port-726261" (driver="docker")
	I1123 08:43:33.425892  301517 client.go:173] LocalClient.Create starting
	I1123 08:43:33.425948  301517 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:43:33.425982  301517 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:33.426000  301517 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:33.426042  301517 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:43:33.426065  301517 main.go:143] libmachine: Decoding PEM data...
	I1123 08:43:33.426077  301517 main.go:143] libmachine: Parsing certificate...
	I1123 08:43:33.426367  301517 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-726261 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:43:33.442499  301517 cli_runner.go:211] docker network inspect default-k8s-diff-port-726261 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:43:33.442564  301517 network_create.go:284] running [docker network inspect default-k8s-diff-port-726261] to gather additional debugging logs...
	I1123 08:43:33.442592  301517 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-726261
	W1123 08:43:33.458039  301517 cli_runner.go:211] docker network inspect default-k8s-diff-port-726261 returned with exit code 1
	I1123 08:43:33.458065  301517 network_create.go:287] error running [docker network inspect default-k8s-diff-port-726261]: docker network inspect default-k8s-diff-port-726261: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-726261 not found
	I1123 08:43:33.458082  301517 network_create.go:289] output of [docker network inspect default-k8s-diff-port-726261]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-726261 not found
	
	** /stderr **
	I1123 08:43:33.458172  301517 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:33.475211  301517 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:43:33.475976  301517 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:43:33.476762  301517 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:43:33.477392  301517 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c80b7bca17a7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:f1:41:59:09:b5} reservation:<nil>}
	I1123 08:43:33.478229  301517 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e35860}
	I1123 08:43:33.478250  301517 network_create.go:124] attempt to create docker network default-k8s-diff-port-726261 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:43:33.478300  301517 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-726261 default-k8s-diff-port-726261
	I1123 08:43:33.523030  301517 network_create.go:108] docker network default-k8s-diff-port-726261 192.168.85.0/24 created
	I1123 08:43:33.523059  301517 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-726261" container
	I1123 08:43:33.523116  301517 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:43:33.539793  301517 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-726261 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-726261 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:43:33.557448  301517 oci.go:103] Successfully created a docker volume default-k8s-diff-port-726261
	I1123 08:43:33.557524  301517 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-726261-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-726261 --entrypoint /usr/bin/test -v default-k8s-diff-port-726261:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:43:33.939863  301517 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-726261
	I1123 08:43:33.939939  301517 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:43:33.939955  301517 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:43:33.940020  301517 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-726261:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:43:35.117933  299523 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-187607
	
	I1123 08:43:35.117963  299523 ubuntu.go:182] provisioning hostname "no-preload-187607"
	I1123 08:43:35.118029  299523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:43:35.136410  299523 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:35.136619  299523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1123 08:43:35.136633  299523 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-187607 && echo "no-preload-187607" | sudo tee /etc/hostname
	I1123 08:43:35.289165  299523 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-187607
	
	I1123 08:43:35.289246  299523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:43:35.308705  299523 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:35.308910  299523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1123 08:43:35.308926  299523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-187607' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-187607/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-187607' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:43:35.451143  299523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:43:35.451170  299523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:43:35.451188  299523 ubuntu.go:190] setting up certificates
	I1123 08:43:35.451200  299523 provision.go:84] configureAuth start
	I1123 08:43:35.451263  299523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-187607
	I1123 08:43:35.470065  299523 provision.go:143] copyHostCerts
	I1123 08:43:35.470127  299523 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:43:35.470138  299523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:43:35.470217  299523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:43:35.470334  299523 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:43:35.470348  299523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:43:35.470400  299523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:43:35.470472  299523 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:43:35.470480  299523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:43:35.470505  299523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:43:35.470556  299523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.no-preload-187607 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-187607]
	I1123 08:43:35.583090  299523 provision.go:177] copyRemoteCerts
	I1123 08:43:35.583154  299523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:43:35.583200  299523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:43:35.601398  299523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa Username:docker}
	I1123 08:43:35.702820  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:43:35.728193  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:43:35.744737  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:43:35.762537  299523 provision.go:87] duration metric: took 311.321062ms to configureAuth
	I1123 08:43:35.762563  299523 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:43:35.762743  299523 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:43:35.762857  299523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:43:35.781540  299523 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:35.781819  299523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1123 08:43:35.781842  299523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:43:36.083899  299523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:43:36.083925  299523 machine.go:97] duration metric: took 4.132289547s to provisionDockerMachine
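The CRIO_MINIKUBE_OPTIONS exchange above is the only runtime-specific step in this provisioning pass: a one-line environment drop-in that marks the 10.96.0.0/12 service CIDR as an insecure registry range (so an in-cluster registry can be pulled from without TLS), followed by a CRI-O restart. On the node it is simply:

	sudo mkdir -p /etc/sysconfig
	printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio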
	I1123 08:43:36.083934  299523 client.go:176] duration metric: took 6.895528849s to LocalClient.Create
	I1123 08:43:36.083955  299523 start.go:167] duration metric: took 6.895583808s to libmachine.API.Create "no-preload-187607"
	I1123 08:43:36.083965  299523 start.go:293] postStartSetup for "no-preload-187607" (driver="docker")
	I1123 08:43:36.083974  299523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:43:36.084029  299523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:43:36.084066  299523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:43:36.101662  299523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa Username:docker}
	I1123 08:43:36.224948  299523 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:43:36.228282  299523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:43:36.228309  299523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:43:36.228319  299523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:43:36.228370  299523 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:43:36.228451  299523 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:43:36.228563  299523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:43:36.235712  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:43:36.334781  299523 start.go:296] duration metric: took 250.803598ms for postStartSetup
	I1123 08:43:36.392212  299523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-187607
	I1123 08:43:36.409602  299523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/config.json ...
	I1123 08:43:36.454889  299523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:43:36.454943  299523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:43:36.471753  299523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa Username:docker}
	I1123 08:43:36.569167  299523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:43:36.573563  299523 start.go:128] duration metric: took 7.388288376s to createHost
	I1123 08:43:36.573588  299523 start.go:83] releasing machines lock for "no-preload-187607", held for 7.388476463s
	I1123 08:43:36.573651  299523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-187607
	I1123 08:43:36.591110  299523 ssh_runner.go:195] Run: cat /version.json
	I1123 08:43:36.591153  299523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:43:36.591195  299523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:43:36.591272  299523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:43:36.610447  299523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa Username:docker}
	I1123 08:43:36.611501  299523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa Username:docker}
	I1123 08:43:36.708971  299523 ssh_runner.go:195] Run: systemctl --version
	I1123 08:43:36.761623  299523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:43:36.794054  299523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:43:36.798366  299523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:43:36.798425  299523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:43:37.133329  299523 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
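The find invocation above is logged with its shell escaping stripped; restored (the quoting below is reconstructed, not taken from the log), the step renames any pre-existing bridge/podman CNI configs out of the way so that only the CNI minikube installs, kindnet in this run, stays active:

	# {} is expanded by find inside the sh -c string, matching the logged form;
	# safe here because the candidate paths contain no special characters
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;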
	I1123 08:43:37.133352  299523 start.go:496] detecting cgroup driver to use...
	I1123 08:43:37.133386  299523 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:43:37.133434  299523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:43:37.149645  299523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:43:37.161964  299523 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:43:37.162020  299523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:43:37.177828  299523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:43:37.194580  299523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:43:37.281232  299523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:43:37.457232  299523 docker.go:234] disabling docker service ...
	I1123 08:43:37.457286  299523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:43:37.474925  299523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:43:37.486793  299523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:43:37.582219  299523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:43:37.664252  299523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
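Condensed into one sequence (the log runs each unit separately), the cri-docker/docker shutdown above stops, disables, and masks both the sockets and the services, leaving CRI-O as the only runtime that can own containers on the node:

	# take cri-dockerd and dockerd fully out of play before starting CRI-O
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	# minikube then re-checks that the docker service is no longer active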
	I1123 08:43:37.676196  299523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:43:37.689696  299523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:43:37.689741  299523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:37.716977  299523 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:43:37.717033  299523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:37.846660  299523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:37.857047  299523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:37.966092  299523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:43:37.975508  299523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:38.100310  299523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:38.227671  299523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:38.255808  299523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:43:38.263324  299523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:43:38.270742  299523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:38.362200  299523 ssh_runner.go:195] Run: sudo systemctl restart crio
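Reassembled from the ssh_runner commands above, the whole CRI-O configuration pass is a short series of on-node edits followed by a restart; the paths, sed patterns, and values are exactly those logged, only the crictl.yaml printf is condensed:

	# point crictl at CRI-O's socket
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# pin the pause image, use the systemd cgroup driver, run conmon in the pod cgroup
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# let pods bind low ports unprivileged, and enable IPv4 forwarding
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio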
	I1123 08:43:38.508826  299523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:43:38.508895  299523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:43:38.513462  299523 start.go:564] Will wait 60s for crictl version
	I1123 08:43:38.513521  299523 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.518001  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:43:38.546359  299523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:43:38.546458  299523 ssh_runner.go:195] Run: crio --version
	I1123 08:43:38.576915  299523 ssh_runner.go:195] Run: crio --version
	I1123 08:43:38.609985  299523 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:43:38.611229  299523 cli_runner.go:164] Run: docker network inspect no-preload-187607 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:38.631260  299523 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 08:43:38.635531  299523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
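Unfolded for readability (printf substituted here for the logged echo with a literal tab), the hosts update above stays idempotent: strip any stale host.minikube.internal line, append the current gateway IP, and copy back through a temp file, since sudo redirection cannot write /etc/hosts directly:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.94.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts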
	I1123 08:43:38.646259  299523 kubeadm.go:884] updating cluster {Name:no-preload-187607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-187607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:43:38.646403  299523 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:43:38.646446  299523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:43:38.676814  299523 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1123 08:43:38.676837  299523 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1123 08:43:38.676908  299523 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:38.676924  299523 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:38.676948  299523 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:38.676969  299523 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:38.677027  299523 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:38.676928  299523 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:38.676953  299523 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:38.677137  299523 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1123 08:43:38.678240  299523 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:38.678390  299523 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:38.678518  299523 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:38.678547  299523 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:38.678552  299523 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:38.678578  299523 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:38.678778  299523 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:38.678971  299523 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1123 08:43:38.796463  299523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:38.800324  299523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:38.802153  299523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:38.821422  299523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:38.821675  299523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:38.825439  299523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1123 08:43:38.836470  299523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:38.853366  299523 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1123 08:43:38.853427  299523 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:38.853476  299523 ssh_runner.go:195] Run: which crictl
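The podman image inspect probes above work because podman and CRI-O share the containers/storage image store on the kicbase node; a missing image or an ID that does not match the expected hash, as with kube-scheduler here, marks the image "needs transfer", after which it is removed and re-loaded from the host-side cache. The probe shape, with one logged image as the example:

	# non-zero exit (or an unexpected ID) means the image must be transferred
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-scheduler:v1.34.1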
	W1123 08:43:39.391816  288790 node_ready.go:57] node "old-k8s-version-057894" has "Ready":"False" status (will retry)
	W1123 08:43:41.890315  288790 node_ready.go:57] node "old-k8s-version-057894" has "Ready":"False" status (will retry)
	I1123 08:43:42.890174  288790 node_ready.go:49] node "old-k8s-version-057894" is "Ready"
	I1123 08:43:42.890272  288790 node_ready.go:38] duration metric: took 14.503452234s for node "old-k8s-version-057894" to be "Ready" ...
	I1123 08:43:42.890303  288790 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:43:42.890393  288790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:43:42.904523  288790 api_server.go:72] duration metric: took 15.080275575s to wait for apiserver process to appear ...
	I1123 08:43:42.904548  288790 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:43:42.904568  288790 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:43:42.909043  288790 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
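A manual probe equivalent to the healthz check above would look like the following; -k is needed because the apiserver certificate is signed by minikubeCA, and reaching /healthz anonymously assumes the default system:public-info-viewer binding is intact, so treat this as a sketch rather than a guaranteed-open endpoint:

	curl -k https://192.168.76.2:8443/healthz
	# expected body on success: ok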
	I1123 08:43:42.910494  288790 api_server.go:141] control plane version: v1.28.0
	I1123 08:43:42.910519  288790 api_server.go:131] duration metric: took 5.96396ms to wait for apiserver health ...
	I1123 08:43:42.910529  288790 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:43:42.914794  288790 system_pods.go:59] 8 kube-system pods found
	I1123 08:43:42.914829  288790 system_pods.go:61] "coredns-5dd5756b68-t8zg8" [f09dcee9-59c4-42e4-b347-ad3edcaf7e99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:42.914837  288790 system_pods.go:61] "etcd-old-k8s-version-057894" [6d9f6e4a-1fda-454c-af4a-a063eaec8ff4] Running
	I1123 08:43:42.914851  288790 system_pods.go:61] "kindnet-lwhjw" [23c26128-6a1c-49ce-9584-c744e1c0020f] Running
	I1123 08:43:42.914857  288790 system_pods.go:61] "kube-apiserver-old-k8s-version-057894" [01709ee1-0b4b-417e-aa41-233c3eb6c516] Running
	I1123 08:43:42.914863  288790 system_pods.go:61] "kube-controller-manager-old-k8s-version-057894" [8acaebc2-556f-4af4-b611-ea475349197c] Running
	I1123 08:43:42.914868  288790 system_pods.go:61] "kube-proxy-6t2mg" [d718da2c-03e9-429b-ae93-fb6053fa65b9] Running
	I1123 08:43:42.914874  288790 system_pods.go:61] "kube-scheduler-old-k8s-version-057894" [580e3abd-6da9-4046-a64f-848ac8a47bc8] Running
	I1123 08:43:42.914886  288790 system_pods.go:61] "storage-provisioner" [8c02ffc7-dd73-4e75-b9c4-b386f8709f29] Running
	I1123 08:43:42.914894  288790 system_pods.go:74] duration metric: took 4.357625ms to wait for pod list to return data ...
	I1123 08:43:42.914902  288790 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:43:42.917156  288790 default_sa.go:45] found service account: "default"
	I1123 08:43:42.917178  288790 default_sa.go:55] duration metric: took 2.26696ms for default service account to be created ...
	I1123 08:43:42.917189  288790 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:43:42.920478  288790 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:42.920510  288790 system_pods.go:89] "coredns-5dd5756b68-t8zg8" [f09dcee9-59c4-42e4-b347-ad3edcaf7e99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:42.920517  288790 system_pods.go:89] "etcd-old-k8s-version-057894" [6d9f6e4a-1fda-454c-af4a-a063eaec8ff4] Running
	I1123 08:43:42.920525  288790 system_pods.go:89] "kindnet-lwhjw" [23c26128-6a1c-49ce-9584-c744e1c0020f] Running
	I1123 08:43:42.920531  288790 system_pods.go:89] "kube-apiserver-old-k8s-version-057894" [01709ee1-0b4b-417e-aa41-233c3eb6c516] Running
	I1123 08:43:42.920537  288790 system_pods.go:89] "kube-controller-manager-old-k8s-version-057894" [8acaebc2-556f-4af4-b611-ea475349197c] Running
	I1123 08:43:42.920551  288790 system_pods.go:89] "kube-proxy-6t2mg" [d718da2c-03e9-429b-ae93-fb6053fa65b9] Running
	I1123 08:43:42.920561  288790 system_pods.go:89] "kube-scheduler-old-k8s-version-057894" [580e3abd-6da9-4046-a64f-848ac8a47bc8] Running
	I1123 08:43:42.920571  288790 system_pods.go:89] "storage-provisioner" [8c02ffc7-dd73-4e75-b9c4-b386f8709f29] Running
	I1123 08:43:42.920600  288790 retry.go:31] will retry after 202.588171ms: missing components: kube-dns
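
The "will retry after ..." lines come from a retry helper that sleeps a growing, slightly randomized interval between checks (202ms, then 317ms, then 392ms, then 608ms below). A toy version of that pattern, assuming doubling backoff with jitter; minikube's actual constants may differ.

// retry_sketch.go — grow-and-jitter retry loop like retry.go's output above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(maxWait time.Duration, check func() error) error {
	start := time.Now()
	wait := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxWait {
			return fmt.Errorf("gave up after %s: %w", maxWait, err)
		}
		// Randomize the delay slightly so parallel waiters do not sync up.
		d := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
		wait *= 2 // double the base delay each attempt (assumed policy)
	}
}

func main() {
	attempts := 0
	_ = retry(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}
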
	I1123 08:43:38.275821  301517 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-726261:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.335740517s)
	I1123 08:43:38.275856  301517 kic.go:203] duration metric: took 4.335897676s to extract preloaded images to volume ...
	W1123 08:43:38.275963  301517 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:43:38.276010  301517 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:43:38.276057  301517 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:43:38.348745  301517 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-726261 --name default-k8s-diff-port-726261 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-726261 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-726261 --network default-k8s-diff-port-726261 --ip 192.168.85.2 --volume default-k8s-diff-port-726261:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:43:38.664640  301517 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-726261 --format={{.State.Running}}
	I1123 08:43:38.686653  301517 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-726261 --format={{.State.Status}}
	I1123 08:43:38.713149  301517 cli_runner.go:164] Run: docker exec default-k8s-diff-port-726261 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:43:38.768577  301517 oci.go:144] the created container "default-k8s-diff-port-726261" has a running status.
	I1123 08:43:38.768602  301517 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa...
	I1123 08:43:38.798839  301517 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:43:38.836560  301517 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-726261 --format={{.State.Status}}
	I1123 08:43:38.879095  301517 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:43:38.879117  301517 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-726261 chown docker:docker /home/docker/.ssh/authorized_keys]
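
The key-creation step above generates an RSA keypair on the host, then copies the public half into the container as /home/docker/.ssh/authorized_keys (the 381-byte file in the log) and chowns it. A self-contained sketch of that step using golang.org/x/crypto/ssh; it prints the material instead of writing the id_rsa files.

// kic_sshkey_sketch.go — generate a keypair and its authorized_keys line.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private half, PEM-encoded as an id_rsa file would be.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// Public half in authorized_keys format (what lands in the container).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("private key: %d bytes\nauthorized_keys line: %s",
		len(privPEM), ssh.MarshalAuthorizedKey(pub))
}
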
	I1123 08:43:38.948151  301517 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-726261 --format={{.State.Status}}
	I1123 08:43:38.978706  301517 machine.go:94] provisionDockerMachine start ...
	I1123 08:43:38.978826  301517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:43:39.009801  301517 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:39.010168  301517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1123 08:43:39.010185  301517 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:43:39.011196  301517 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44746->127.0.0.1:33101: read: connection reset by peer
	I1123 08:43:42.154342  301517 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-726261
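
The repeated docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls resolve which host port Docker mapped to the container's SSH port (33101 here). The same Go template can be replayed against a stub of the inspect JSON to see what it extracts; the JSON below is a hand-written stand-in, not real inspect output.

// port_lookup.go — replay the inspect template with text/template.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

const inspectJSON = `{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33101"}]}}}`

func main() {
	var c map[string]any
	json.Unmarshal([]byte(inspectJSON), &c)
	tmpl := template.Must(template.New("p").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	tmpl.Execute(os.Stdout, c) // prints 33101, the host port dialed for SSH
}
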
	
	I1123 08:43:42.154372  301517 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-726261"
	I1123 08:43:42.154431  301517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:43:42.172476  301517 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:42.172669  301517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1123 08:43:42.172681  301517 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-726261 && echo "default-k8s-diff-port-726261" | sudo tee /etc/hostname
	I1123 08:43:42.324549  301517 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-726261
	
	I1123 08:43:42.324632  301517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:43:42.342632  301517 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:42.342985  301517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1123 08:43:42.343016  301517 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-726261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-726261/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-726261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:43:42.495345  301517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:43:42.495379  301517 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:43:42.495406  301517 ubuntu.go:190] setting up certificates
	I1123 08:43:42.495416  301517 provision.go:84] configureAuth start
	I1123 08:43:42.495480  301517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-726261
	I1123 08:43:42.519044  301517 provision.go:143] copyHostCerts
	I1123 08:43:42.519128  301517 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:43:42.519147  301517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:43:42.519232  301517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:43:42.519365  301517 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:43:42.519378  301517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:43:42.519426  301517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:43:42.519541  301517 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:43:42.519553  301517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:43:42.519595  301517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:43:42.519719  301517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-726261 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-726261 localhost minikube]
	I1123 08:43:42.633993  301517 provision.go:177] copyRemoteCerts
	I1123 08:43:42.634084  301517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:43:42.634131  301517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:43:42.657852  301517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:43:42.763392  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:43:42.784006  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 08:43:42.807479  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:43:42.830634  301517 provision.go:87] duration metric: took 335.199818ms to configureAuth
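
configureAuth copies the host CA material and then issues a server certificate whose SANs match the san=[...] list logged above. A compact crypto/x509 sketch of that issuance follows; it creates a throwaway CA inline, whereas minikube loads ca.pem/ca-key.pem from disk, and the subject and lifetimes are assumptions.

// servercert_sketch.go — issue a CA-signed server cert with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stand-in for the on-disk minikubeCA key material).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-726261"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"default-k8s-diff-port-726261", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
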
	I1123 08:43:42.830663  301517 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:43:42.830861  301517 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:43:42.831141  301517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:43:42.863272  301517 main.go:143] libmachine: Using SSH client type: native
	I1123 08:43:42.863568  301517 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1123 08:43:42.863591  301517 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:43:43.186980  301517 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:43:43.187009  301517 machine.go:97] duration metric: took 4.208279214s to provisionDockerMachine
	I1123 08:43:43.187020  301517 client.go:176] duration metric: took 9.761119473s to LocalClient.Create
	I1123 08:43:43.187042  301517 start.go:167] duration metric: took 9.761181849s to libmachine.API.Create "default-k8s-diff-port-726261"
	I1123 08:43:43.187051  301517 start.go:293] postStartSetup for "default-k8s-diff-port-726261" (driver="docker")
	I1123 08:43:43.187061  301517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:43:43.187132  301517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:43:43.187179  301517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:43:43.206364  301517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:43:38.872394  299523 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1123 08:43:38.872440  299523 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:38.872490  299523 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.872658  299523 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1123 08:43:38.872732  299523 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:38.872872  299523 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.899371  299523 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1123 08:43:38.899413  299523 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:38.899457  299523 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.901114  299523 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1123 08:43:38.901153  299523 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:38.901203  299523 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.914450  299523 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1123 08:43:38.914487  299523 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1123 08:43:38.914502  299523 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1123 08:43:38.914520  299523 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:38.914526  299523 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.914547  299523 ssh_runner.go:195] Run: which crictl
	I1123 08:43:38.914600  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:38.914649  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:38.914716  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:38.914759  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:38.914792  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:38.921142  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:38.921187  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:38.993435  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:38.993477  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:38.993510  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:38.993538  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:38.993582  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:39.003869  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:39.003955  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:39.037419  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1123 08:43:39.037915  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1123 08:43:39.047199  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1123 08:43:39.047289  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1123 08:43:39.047376  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1123 08:43:39.048061  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1123 08:43:39.054322  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1123 08:43:39.095621  299523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1123 08:43:39.095736  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:39.103671  299523 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:39.121399  299523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1123 08:43:39.121634  299523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1123 08:43:39.121832  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:39.122005  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:39.139590  299523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1123 08:43:39.139697  299523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1123 08:43:39.139707  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:39.139731  299523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1123 08:43:39.139792  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:39.139808  299523 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1123 08:43:39.139817  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:39.139824  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1123 08:43:39.139701  299523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1123 08:43:39.139879  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:39.195171  299523 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1123 08:43:39.195234  299523 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:39.195277  299523 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1123 08:43:39.195309  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1123 08:43:39.195322  299523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1123 08:43:39.195349  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1123 08:43:39.195285  299523 ssh_runner.go:195] Run: which crictl
	I1123 08:43:39.195412  299523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1123 08:43:39.195426  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1123 08:43:39.195470  299523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1123 08:43:39.195496  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1123 08:43:39.195538  299523 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1123 08:43:39.195551  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1123 08:43:39.195577  299523 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1123 08:43:39.195595  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
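
Each cached image above follows the same stat-then-scp pattern: stat the path under /var/lib/minikube/images, and only transfer the tarball when the stat exits non-zero ("No such file or directory"). A local-filesystem sketch of that check, with os.Stat and a plain file copy standing in for ssh_runner and scp; the paths are illustrative.

// transfer_if_missing.go — copy a cached image tarball only when absent.
package main

import (
	"fmt"
	"io"
	"os"
)

func transferIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present on the "remote" side, skip the copy
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("%s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	_ = transferIfMissing("/tmp/etcd_3.6.4-0", "/tmp/images/etcd_3.6.4-0")
}
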
	I1123 08:43:39.269706  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:39.301903  299523 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:39.301973  299523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1123 08:43:39.371644  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:39.708340  299523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1123 08:43:39.708378  299523 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:39.708423  299523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1123 08:43:39.708423  299523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:43:39.744342  299523 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1123 08:43:39.744452  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:43:40.806474  299523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.09802176s)
	I1123 08:43:40.806507  299523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1123 08:43:40.806528  299523 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.062056825s)
	I1123 08:43:40.806538  299523 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:40.806559  299523 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1123 08:43:40.806588  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1123 08:43:40.806593  299523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1123 08:43:41.916235  299523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.10961857s)
	I1123 08:43:41.916257  299523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 08:43:41.916284  299523 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:41.916367  299523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1123 08:43:43.216276  299523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.299881443s)
	I1123 08:43:43.216302  299523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1123 08:43:43.216326  299523 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 08:43:43.216367  299523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
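
The "Loading image:" lines show the tarballs being loaded strictly one at a time via sudo podman load -i, each taking roughly a second. A sketch of that serial loop with os/exec; the serialization mirrors the log's behavior, and the paths are the ones transferred above.

// load_images.go — serial podman load over the transferred tarballs.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	images := []string{
		"/var/lib/minikube/images/pause_3.10.1",
		"/var/lib/minikube/images/kube-scheduler_v1.34.1",
		"/var/lib/minikube/images/kube-controller-manager_v1.34.1",
		"/var/lib/minikube/images/coredns_v1.12.1",
		"/var/lib/minikube/images/kube-proxy_v1.34.1",
	}
	for _, tar := range images {
		start := time.Now()
		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
		if err != nil {
			fmt.Printf("load %s failed: %v\n%s", tar, err, out)
			continue
		}
		fmt.Printf("loaded %s in %s\n", tar, time.Since(start))
	}
}
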
	I1123 08:43:43.309804  301517 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:43:43.313094  301517 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:43:43.313119  301517 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:43:43.313128  301517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:43:43.313180  301517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:43:43.313276  301517 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:43:43.313385  301517 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:43:43.320743  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:43:43.339050  301517 start.go:296] duration metric: took 151.98896ms for postStartSetup
	I1123 08:43:43.339397  301517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-726261
	I1123 08:43:43.358662  301517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/config.json ...
	I1123 08:43:43.358933  301517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:43:43.358977  301517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:43:43.377334  301517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:43:43.474410  301517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:43:43.478830  301517 start.go:128] duration metric: took 10.054646546s to createHost
	I1123 08:43:43.478853  301517 start.go:83] releasing machines lock for "default-k8s-diff-port-726261", held for 10.054764522s
	I1123 08:43:43.478940  301517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-726261
	I1123 08:43:43.496738  301517 ssh_runner.go:195] Run: cat /version.json
	I1123 08:43:43.496799  301517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:43:43.496826  301517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:43:43.496896  301517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:43:43.516145  301517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:43:43.516875  301517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:43:43.613640  301517 ssh_runner.go:195] Run: systemctl --version
	I1123 08:43:43.666329  301517 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:43:43.700408  301517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:43:43.705586  301517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:43:43.705654  301517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:43:43.736163  301517 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:43:43.736188  301517 start.go:496] detecting cgroup driver to use...
	I1123 08:43:43.736224  301517 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:43:43.736269  301517 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:43:43.755483  301517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:43:43.767861  301517 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:43:43.767915  301517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:43:43.785560  301517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:43:43.806504  301517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:43:43.924978  301517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:43:44.040929  301517 docker.go:234] disabling docker service ...
	I1123 08:43:44.041006  301517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:43:44.067512  301517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:43:44.084014  301517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:43:44.192787  301517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:43:44.302406  301517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:43:44.315265  301517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:43:44.330027  301517 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:43:44.330078  301517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:44.340422  301517 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:43:44.340477  301517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:44.350507  301517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:44.359749  301517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:44.369901  301517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:43:44.378743  301517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:44.387998  301517 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:44.401554  301517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:43:44.410238  301517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:43:44.417443  301517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:43:44.425112  301517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:44.532101  301517 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:43:45.788608  301517 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.256473591s)
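
The block above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with a series of sed substitutions (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), then reloads systemd and restarts crio. The core substitutions expressed as Go regexp rewrites over a minimal stand-in config; the starting values in the input are assumptions.

// crio_conf_edit.go — the sed edits above as regexp rewrites.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// The log deletes conmon_cgroup and re-adds it as "pod" after the
	// cgroup_manager line; collapsed here into a single rewrite.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
}
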
	I1123 08:43:45.788640  301517 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:43:45.788725  301517 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:43:45.792600  301517 start.go:564] Will wait 60s for crictl version
	I1123 08:43:45.792657  301517 ssh_runner.go:195] Run: which crictl
	I1123 08:43:45.796541  301517 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:43:45.825830  301517 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:43:45.825912  301517 ssh_runner.go:195] Run: crio --version
	I1123 08:43:45.862969  301517 ssh_runner.go:195] Run: crio --version
	I1123 08:43:45.900126  301517 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:43:43.128567  288790 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:43.128609  288790 system_pods.go:89] "coredns-5dd5756b68-t8zg8" [f09dcee9-59c4-42e4-b347-ad3edcaf7e99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:43.128617  288790 system_pods.go:89] "etcd-old-k8s-version-057894" [6d9f6e4a-1fda-454c-af4a-a063eaec8ff4] Running
	I1123 08:43:43.128625  288790 system_pods.go:89] "kindnet-lwhjw" [23c26128-6a1c-49ce-9584-c744e1c0020f] Running
	I1123 08:43:43.128631  288790 system_pods.go:89] "kube-apiserver-old-k8s-version-057894" [01709ee1-0b4b-417e-aa41-233c3eb6c516] Running
	I1123 08:43:43.128637  288790 system_pods.go:89] "kube-controller-manager-old-k8s-version-057894" [8acaebc2-556f-4af4-b611-ea475349197c] Running
	I1123 08:43:43.128643  288790 system_pods.go:89] "kube-proxy-6t2mg" [d718da2c-03e9-429b-ae93-fb6053fa65b9] Running
	I1123 08:43:43.128648  288790 system_pods.go:89] "kube-scheduler-old-k8s-version-057894" [580e3abd-6da9-4046-a64f-848ac8a47bc8] Running
	I1123 08:43:43.128682  288790 system_pods.go:89] "storage-provisioner" [8c02ffc7-dd73-4e75-b9c4-b386f8709f29] Running
	I1123 08:43:43.128714  288790 retry.go:31] will retry after 317.279031ms: missing components: kube-dns
	I1123 08:43:43.449669  288790 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:43.449712  288790 system_pods.go:89] "coredns-5dd5756b68-t8zg8" [f09dcee9-59c4-42e4-b347-ad3edcaf7e99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:43.449717  288790 system_pods.go:89] "etcd-old-k8s-version-057894" [6d9f6e4a-1fda-454c-af4a-a063eaec8ff4] Running
	I1123 08:43:43.449723  288790 system_pods.go:89] "kindnet-lwhjw" [23c26128-6a1c-49ce-9584-c744e1c0020f] Running
	I1123 08:43:43.449726  288790 system_pods.go:89] "kube-apiserver-old-k8s-version-057894" [01709ee1-0b4b-417e-aa41-233c3eb6c516] Running
	I1123 08:43:43.449731  288790 system_pods.go:89] "kube-controller-manager-old-k8s-version-057894" [8acaebc2-556f-4af4-b611-ea475349197c] Running
	I1123 08:43:43.449734  288790 system_pods.go:89] "kube-proxy-6t2mg" [d718da2c-03e9-429b-ae93-fb6053fa65b9] Running
	I1123 08:43:43.449737  288790 system_pods.go:89] "kube-scheduler-old-k8s-version-057894" [580e3abd-6da9-4046-a64f-848ac8a47bc8] Running
	I1123 08:43:43.449740  288790 system_pods.go:89] "storage-provisioner" [8c02ffc7-dd73-4e75-b9c4-b386f8709f29] Running
	I1123 08:43:43.449753  288790 retry.go:31] will retry after 392.747615ms: missing components: kube-dns
	I1123 08:43:43.848996  288790 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:43.849152  288790 system_pods.go:89] "coredns-5dd5756b68-t8zg8" [f09dcee9-59c4-42e4-b347-ad3edcaf7e99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:43:43.849188  288790 system_pods.go:89] "etcd-old-k8s-version-057894" [6d9f6e4a-1fda-454c-af4a-a063eaec8ff4] Running
	I1123 08:43:43.849198  288790 system_pods.go:89] "kindnet-lwhjw" [23c26128-6a1c-49ce-9584-c744e1c0020f] Running
	I1123 08:43:43.849204  288790 system_pods.go:89] "kube-apiserver-old-k8s-version-057894" [01709ee1-0b4b-417e-aa41-233c3eb6c516] Running
	I1123 08:43:43.849211  288790 system_pods.go:89] "kube-controller-manager-old-k8s-version-057894" [8acaebc2-556f-4af4-b611-ea475349197c] Running
	I1123 08:43:43.849216  288790 system_pods.go:89] "kube-proxy-6t2mg" [d718da2c-03e9-429b-ae93-fb6053fa65b9] Running
	I1123 08:43:43.849221  288790 system_pods.go:89] "kube-scheduler-old-k8s-version-057894" [580e3abd-6da9-4046-a64f-848ac8a47bc8] Running
	I1123 08:43:43.849226  288790 system_pods.go:89] "storage-provisioner" [8c02ffc7-dd73-4e75-b9c4-b386f8709f29] Running
	I1123 08:43:43.849336  288790 retry.go:31] will retry after 608.678544ms: missing components: kube-dns
	I1123 08:43:44.465621  288790 system_pods.go:86] 8 kube-system pods found
	I1123 08:43:44.465651  288790 system_pods.go:89] "coredns-5dd5756b68-t8zg8" [f09dcee9-59c4-42e4-b347-ad3edcaf7e99] Running
	I1123 08:43:44.465659  288790 system_pods.go:89] "etcd-old-k8s-version-057894" [6d9f6e4a-1fda-454c-af4a-a063eaec8ff4] Running
	I1123 08:43:44.465664  288790 system_pods.go:89] "kindnet-lwhjw" [23c26128-6a1c-49ce-9584-c744e1c0020f] Running
	I1123 08:43:44.465669  288790 system_pods.go:89] "kube-apiserver-old-k8s-version-057894" [01709ee1-0b4b-417e-aa41-233c3eb6c516] Running
	I1123 08:43:44.465674  288790 system_pods.go:89] "kube-controller-manager-old-k8s-version-057894" [8acaebc2-556f-4af4-b611-ea475349197c] Running
	I1123 08:43:44.465679  288790 system_pods.go:89] "kube-proxy-6t2mg" [d718da2c-03e9-429b-ae93-fb6053fa65b9] Running
	I1123 08:43:44.465697  288790 system_pods.go:89] "kube-scheduler-old-k8s-version-057894" [580e3abd-6da9-4046-a64f-848ac8a47bc8] Running
	I1123 08:43:44.465702  288790 system_pods.go:89] "storage-provisioner" [8c02ffc7-dd73-4e75-b9c4-b386f8709f29] Running
	I1123 08:43:44.465711  288790 system_pods.go:126] duration metric: took 1.548515465s to wait for k8s-apps to be running ...
	I1123 08:43:44.465726  288790 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:43:44.465771  288790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:43:44.480895  288790 system_svc.go:56] duration metric: took 15.161816ms WaitForService to wait for kubelet
	I1123 08:43:44.480923  288790 kubeadm.go:587] duration metric: took 16.65667909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:43:44.480944  288790 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:43:44.483808  288790 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:43:44.483836  288790 node_conditions.go:123] node cpu capacity is 8
	I1123 08:43:44.483851  288790 node_conditions.go:105] duration metric: took 2.901425ms to run NodePressure ...
	I1123 08:43:44.483866  288790 start.go:242] waiting for startup goroutines ...
	I1123 08:43:44.483876  288790 start.go:247] waiting for cluster config update ...
	I1123 08:43:44.483895  288790 start.go:256] writing updated cluster config ...
	I1123 08:43:44.484183  288790 ssh_runner.go:195] Run: rm -f paused
	I1123 08:43:44.488125  288790 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:43:44.492804  288790 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-t8zg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:44.497100  288790 pod_ready.go:94] pod "coredns-5dd5756b68-t8zg8" is "Ready"
	I1123 08:43:44.497131  288790 pod_ready.go:86] duration metric: took 4.291876ms for pod "coredns-5dd5756b68-t8zg8" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:44.499895  288790 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-057894" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:44.506978  288790 pod_ready.go:94] pod "etcd-old-k8s-version-057894" is "Ready"
	I1123 08:43:44.506998  288790 pod_ready.go:86] duration metric: took 7.082971ms for pod "etcd-old-k8s-version-057894" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:44.521374  288790 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-057894" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:44.528861  288790 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-057894" is "Ready"
	I1123 08:43:44.528924  288790 pod_ready.go:86] duration metric: took 7.487607ms for pod "kube-apiserver-old-k8s-version-057894" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:44.533269  288790 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-057894" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:44.892911  288790 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-057894" is "Ready"
	I1123 08:43:44.892937  288790 pod_ready.go:86] duration metric: took 359.650858ms for pod "kube-controller-manager-old-k8s-version-057894" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:45.092615  288790 pod_ready.go:83] waiting for pod "kube-proxy-6t2mg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:45.547278  288790 pod_ready.go:94] pod "kube-proxy-6t2mg" is "Ready"
	I1123 08:43:45.547310  288790 pod_ready.go:86] duration metric: took 454.670008ms for pod "kube-proxy-6t2mg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:45.693928  288790 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-057894" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:46.093204  288790 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-057894" is "Ready"
	I1123 08:43:46.093238  288790 pod_ready.go:86] duration metric: took 399.281677ms for pod "kube-scheduler-old-k8s-version-057894" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:43:46.093253  288790 pod_ready.go:40] duration metric: took 1.605096477s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:43:46.149003  288790 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1123 08:43:46.150637  288790 out.go:203] 
	W1123 08:43:46.151976  288790 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:43:46.153249  288790 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:43:46.154676  288790 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-057894" cluster and "default" namespace by default
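
The pod_ready waits above poll each kube-system pod until its PodReady condition reports True. Expressed with client-go rather than minikube's internal pod_ready.go, the loop looks roughly like this; the kubeconfig path, namespace, pod name, and poll interval are illustrative.

// pod_ready_sketch.go — wait for a pod's Ready condition via client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-t8zg8", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Printf("pod %q is Ready\n", pod.Name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
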
	I1123 08:43:45.901319  301517 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-726261 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:43:45.923395  301517 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:43:45.928317  301517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:43:45.939765  301517 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-726261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-726261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:43:45.939916  301517 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:43:45.939984  301517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:43:45.975204  301517 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:43:45.975223  301517 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:43:45.975259  301517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:43:46.003121  301517 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:43:46.003148  301517 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:43:46.003157  301517 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1123 08:43:46.003247  301517 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-726261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-726261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:43:46.003320  301517 ssh_runner.go:195] Run: crio config
	I1123 08:43:46.059343  301517 cni.go:84] Creating CNI manager for ""
	I1123 08:43:46.059374  301517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:43:46.059397  301517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:43:46.059425  301517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-726261 NodeName:default-k8s-diff-port-726261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:43:46.059579  301517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-726261"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
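The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below. When a failure in this area needs debugging by hand, the rendered file can be checked offline; a minimal sketch, assuming a kubeadm new enough to ship the `config validate` subcommand (v1.26+):

	# Validate the rendered multi-document kubeadm config without starting anything.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
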
	I1123 08:43:46.059649  301517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:43:46.070341  301517 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:43:46.070408  301517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:43:46.080408  301517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 08:43:46.097173  301517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:43:46.116870  301517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1123 08:43:46.132721  301517 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:43:46.136934  301517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
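The one-liner above is minikube's idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, re-append the current IP, stage the result in a PID-suffixed temp file, and copy (not move) it back into place. The copy matters because Docker bind-mounts /etc/hosts into the node container, so the file's inode must be preserved. The same pattern, unpacked as a sketch with the values from this run:

	# Ensure exactly one hosts entry maps the control-plane name to the node IP.
	NAME=control-plane.minikube.internal
	IP=192.168.85.2
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts   # cp keeps the original inode (safe for bind mounts)
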
	I1123 08:43:46.149070  301517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:46.244146  301517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:43:46.279422  301517 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261 for IP: 192.168.85.2
	I1123 08:43:46.279448  301517 certs.go:195] generating shared ca certs ...
	I1123 08:43:46.279466  301517 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.279627  301517 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:43:46.279706  301517 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:43:46.279721  301517 certs.go:257] generating profile certs ...
	I1123 08:43:46.279786  301517 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/client.key
	I1123 08:43:46.279802  301517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/client.crt with IP's: []
	I1123 08:43:46.399389  301517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/client.crt ...
	I1123 08:43:46.399412  301517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/client.crt: {Name:mkaeb39795a4f88e7379db3a608d2918fb189d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.399556  301517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/client.key ...
	I1123 08:43:46.399569  301517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/client.key: {Name:mk7bb0eae90fce81054008eb2f72d91f4895bef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.399649  301517 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.key.a1a7b303
	I1123 08:43:46.399667  301517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.crt.a1a7b303 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:43:46.456202  301517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.crt.a1a7b303 ...
	I1123 08:43:46.456227  301517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.crt.a1a7b303: {Name:mk5c2ce41f4c95de94897178a1c24979d25bb7f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.456398  301517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.key.a1a7b303 ...
	I1123 08:43:46.456416  301517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.key.a1a7b303: {Name:mkd355919f066bc9e04f17e2b26f1adb3817ffa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.456525  301517 certs.go:382] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.crt.a1a7b303 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.crt
	I1123 08:43:46.456621  301517 certs.go:386] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.key.a1a7b303 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.key
	I1123 08:43:46.456715  301517 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/proxy-client.key
	I1123 08:43:46.456735  301517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/proxy-client.crt with IP's: []
	I1123 08:43:46.547123  301517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/proxy-client.crt ...
	I1123 08:43:46.547152  301517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/proxy-client.crt: {Name:mk50b93c41014ddb518fdc423531df5c22c1e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.547343  301517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/proxy-client.key ...
	I1123 08:43:46.547370  301517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/proxy-client.key: {Name:mk973125fb7572a821012bdb3bd83269a0e26b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:46.547624  301517 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:43:46.547668  301517 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:43:46.547679  301517 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:43:46.547722  301517 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:43:46.547761  301517 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:43:46.547788  301517 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:43:46.547859  301517 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:43:46.548449  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:43:46.565958  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:43:46.584547  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:43:46.602386  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:43:46.620965  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 08:43:46.639016  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:43:46.660149  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:43:46.679123  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:43:46.698284  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:43:46.719270  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:43:46.737315  301517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:43:46.755392  301517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:43:46.770307  301517 ssh_runner.go:195] Run: openssl version
	I1123 08:43:46.777678  301517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:43:46.786094  301517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:46.790120  301517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:46.790170  301517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:46.830514  301517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:43:46.840586  301517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:43:46.849962  301517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:43:46.853721  301517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:43:46.853769  301517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:43:46.891304  301517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
	I1123 08:43:46.901307  301517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:43:46.910508  301517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:43:46.914747  301517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:43:46.914799  301517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:43:46.953587  301517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
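The alternating `openssl x509 -hash` / `ln -fs` steps above maintain OpenSSL's hashed CA directory: clients scan /etc/ssl/certs for symlinks named <subject-hash>.0 (here b5213941.0, 51391683.0 and 3ec20f2e.0), so every PEM minikube installs needs a matching link. The equivalent bookkeeping for one cert, as a sketch:

	# OpenSSL looks CAs up by subject-name hash, so each PEM needs a <hash>.0 symlink.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
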
	I1123 08:43:46.962171  301517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:43:46.965978  301517 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:43:46.966037  301517 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-726261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-726261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:43:46.966116  301517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:43:46.966163  301517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:43:46.995051  301517 cri.go:89] found id: ""
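`found id: ""` here simply means no kube-system containers exist yet on the fresh node. The query works because the kubelet attaches standard labels (io.kubernetes.pod.name, io.kubernetes.pod.namespace, io.kubernetes.container.name) to every CRI container, and crictl can filter on them directly:

	# List all containers, running or exited, whose pod lives in kube-system.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
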
	I1123 08:43:46.995114  301517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:43:47.003919  301517 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:43:47.011709  301517 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:43:47.011758  301517 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:43:47.020070  301517 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:43:47.020087  301517 kubeadm.go:158] found existing configuration files:
	
	I1123 08:43:47.020126  301517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 08:43:47.027897  301517 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:43:47.027938  301517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:43:47.035105  301517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 08:43:47.043165  301517 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:43:47.043211  301517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:43:47.051558  301517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 08:43:47.059761  301517 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:43:47.059809  301517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:43:47.068423  301517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 08:43:47.076249  301517 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:43:47.076301  301517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:43:47.084269  301517 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
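The long --ignore-preflight-errors list above is deliberate: inside a docker-driver node, host-level checks such as Swap, Mem, NumCPU and SystemVerification cannot be made to pass, so minikube skips them and keeps only the checks that are meaningful in a container. To see which checks would fire without initializing anything, kubeadm's preflight phase can be run on its own; a sketch against the same config file:

	# Dry-run only kubeadm's preflight checks for this environment.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
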
	I1123 08:43:47.165321  301517 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:43:47.238308  301517 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:43:44.348185  299523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.131795032s)
	I1123 08:43:44.348219  299523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 08:43:44.348239  299523 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:44.348275  299523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 08:43:45.967925  299523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.619629213s)
	I1123 08:43:45.967962  299523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 08:43:45.967990  299523 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:45.968036  299523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1123 08:43:50.032762  299523 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.064704045s)
	I1123 08:43:50.032794  299523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 08:43:50.032816  299523 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:43:50.032857  299523 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1123 08:43:50.619533  299523 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 08:43:50.619574  299523 cache_images.go:125] Successfully loaded all cached images
	I1123 08:43:50.619581  299523 cache_images.go:94] duration metric: took 11.942728605s to LoadCachedImages
	I1123 08:43:50.619597  299523 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1123 08:43:50.619755  299523 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-187607 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-187607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
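Note the bare `ExecStart=` followed by a second `ExecStart=` in the unit rendered above: for systemd services, assigning an empty value first clears the previously configured command, which lets a drop-in replace (rather than append to) the packaged kubelet invocation. Written as a standalone drop-in, with the flag list abbreviated from the unit above, the pattern is sketched as:

	# Replace kubelet's command via a drop-in; the empty ExecStart= resets the old one.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet
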
	I1123 08:43:50.619855  299523 ssh_runner.go:195] Run: crio config
	I1123 08:43:50.668937  299523 cni.go:84] Creating CNI manager for ""
	I1123 08:43:50.668956  299523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:43:50.668971  299523 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:43:50.668996  299523 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-187607 NodeName:no-preload-187607 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:43:50.669134  299523 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-187607"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:43:50.669205  299523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:43:50.678508  299523 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 08:43:50.678637  299523 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 08:43:50.687673  299523 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1123 08:43:50.687747  299523 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1123 08:43:50.687757  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 08:43:50.687773  299523 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1123 08:43:50.692074  299523 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 08:43:50.692098  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1123 08:43:51.469677  299523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:43:51.483822  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 08:43:51.487866  299523 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 08:43:51.487901  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1123 08:43:51.599708  299523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 08:43:51.607462  299523 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 08:43:51.607523  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
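The `?checksum=file:...sha256` suffix on the download URLs above is go-getter-style checksum syntax: the published .sha256 file is fetched first and the binary is verified against it before it lands in the cache. Reproducing the verification by hand, as a sketch with the kubelet URL from this run:

	# Fetch kubelet and check it against the published SHA-256 (the .sha256 file holds only the hash).
	curl -fsSLO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check
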
	I1123 08:43:51.828033  299523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:43:51.836425  299523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 08:43:51.849256  299523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:43:51.864116  299523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1123 08:43:51.879991  299523 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:43:51.884716  299523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:43:51.896144  299523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:43:51.991447  299523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:43:52.009958  299523 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607 for IP: 192.168.94.2
	I1123 08:43:52.009976  299523 certs.go:195] generating shared ca certs ...
	I1123 08:43:52.009996  299523 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:52.010150  299523 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:43:52.010191  299523 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:43:52.010200  299523 certs.go:257] generating profile certs ...
	I1123 08:43:52.010251  299523 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/client.key
	I1123 08:43:52.010263  299523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/client.crt with IP's: []
	I1123 08:43:52.197340  299523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/client.crt ...
	I1123 08:43:52.197386  299523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/client.crt: {Name:mk43fcfeb25121aaec6f2c84b6878a6259dbe35b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:52.197579  299523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/client.key ...
	I1123 08:43:52.197599  299523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/client.key: {Name:mkd53a766e4c773bedb721e0df5940a99ef3cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:52.197743  299523 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.key.73dc24c7
	I1123 08:43:52.197767  299523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.crt.73dc24c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1123 08:43:52.342602  299523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.crt.73dc24c7 ...
	I1123 08:43:52.342628  299523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.crt.73dc24c7: {Name:mkb52543e0dfec7f1b26a0a9c4a91b3ee416d643 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:52.342800  299523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.key.73dc24c7 ...
	I1123 08:43:52.342822  299523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.key.73dc24c7: {Name:mk6be75fef3def0a8455989b91f7f4ec8bb1abd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:52.342938  299523 certs.go:382] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.crt.73dc24c7 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.crt
	I1123 08:43:52.343046  299523 certs.go:386] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.key.73dc24c7 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.key
	I1123 08:43:52.343147  299523 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/proxy-client.key
	I1123 08:43:52.343168  299523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/proxy-client.crt with IP's: []
	I1123 08:43:52.503777  299523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/proxy-client.crt ...
	I1123 08:43:52.503804  299523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/proxy-client.crt: {Name:mkee9409a284b9d2ef1f66e953d292f1e11ce8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:52.503957  299523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/proxy-client.key ...
	I1123 08:43:52.503967  299523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/proxy-client.key: {Name:mk13193ea0a7cccb1de8d91c784318c8033ec2a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:43:52.504149  299523 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:43:52.504188  299523 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:43:52.504196  299523 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:43:52.504221  299523 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:43:52.504243  299523 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:43:52.504267  299523 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:43:52.504304  299523 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:43:52.504967  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:43:52.525359  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:43:52.543890  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:43:52.562798  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:43:52.581298  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 08:43:52.598469  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:43:52.615601  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:43:52.631778  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:43:52.648296  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:43:52.666448  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:43:52.684157  299523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:43:52.701427  299523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:43:52.715904  299523 ssh_runner.go:195] Run: openssl version
	I1123 08:43:52.721914  299523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:43:52.731307  299523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:43:52.734938  299523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:43:52.734985  299523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:43:52.774059  299523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
	I1123 08:43:52.785282  299523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:43:52.796089  299523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:43:52.801062  299523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:43:52.801142  299523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:43:52.841495  299523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:43:52.850485  299523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:43:52.859549  299523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:52.863737  299523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:52.863802  299523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:43:52.900563  299523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:43:52.909205  299523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:43:52.912803  299523 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:43:52.912863  299523 kubeadm.go:401] StartCluster: {Name:no-preload-187607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-187607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:43:52.912957  299523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:43:52.913008  299523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:43:52.942073  299523 cri.go:89] found id: ""
	I1123 08:43:52.942141  299523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:43:52.951476  299523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:43:52.959515  299523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:43:52.959569  299523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:43:52.968289  299523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:43:52.968310  299523 kubeadm.go:158] found existing configuration files:
	
	I1123 08:43:52.968354  299523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:43:52.976008  299523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:43:52.976060  299523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:43:52.983594  299523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:43:52.991467  299523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:43:52.991510  299523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:43:52.998591  299523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:43:53.006205  299523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:43:53.006250  299523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:43:53.013589  299523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:43:53.021476  299523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:43:53.021526  299523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:43:53.028760  299523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:43:53.072663  299523 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:43:53.072750  299523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:43:53.096674  299523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:43:53.096780  299523 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:43:53.096834  299523 kubeadm.go:319] OS: Linux
	I1123 08:43:53.096932  299523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:43:53.097022  299523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:43:53.097095  299523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:43:53.097242  299523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:43:53.097341  299523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:43:53.097418  299523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:43:53.097495  299523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:43:53.097558  299523 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:43:53.166781  299523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:43:53.166928  299523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:43:53.167064  299523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:43:53.183056  299523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:43:53.185260  299523 out.go:252]   - Generating certificates and keys ...
	I1123 08:43:53.185484  299523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:43:53.185599  299523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:43:53.714035  299523 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
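kubeadm's own hint in the preflight output above ('kubeadm config images pull') works against the same rendered config if the control-plane images should be warmed before init rather than during it; a sketch with this run's paths:

	# Optionally pre-pull the control-plane images named by the config before running init.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
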
	
	
	==> CRI-O <==
	Nov 23 08:43:42 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:42.851830215Z" level=info msg="Starting container: b9c429e23c934a4c88ac6669980f2bcf7c83d5d19a77aeb80c849d728fe2baeb" id=38c6e5f4-c3bd-4c21-add4-6979ce607bf4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:43:42 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:42.854511876Z" level=info msg="Started container" PID=2149 containerID=b9c429e23c934a4c88ac6669980f2bcf7c83d5d19a77aeb80c849d728fe2baeb description=kube-system/coredns-5dd5756b68-t8zg8/coredns id=38c6e5f4-c3bd-4c21-add4-6979ce607bf4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=41ea67961dca725d7445ab142a1f6cbae263a5cc263914370fe050dc052ba2ee
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.635394612Z" level=info msg="Running pod sandbox: default/busybox/POD" id=40bc382f-8c8c-416b-a112-d4d3fa3e2469 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.635473626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.641194708Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1a39a7c19fb908233e8b29070d79fa89295252d0ec26cef124fbc73ff7c88219 UID:3dff7874-bfd3-4630-aa6d-acede64007db NetNS:/var/run/netns/77d70280-e240-4404-abae-3d10c301fa8e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001333b0}] Aliases:map[]}"
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.641228284Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.651934669Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1a39a7c19fb908233e8b29070d79fa89295252d0ec26cef124fbc73ff7c88219 UID:3dff7874-bfd3-4630-aa6d-acede64007db NetNS:/var/run/netns/77d70280-e240-4404-abae-3d10c301fa8e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001333b0}] Aliases:map[]}"
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.652123539Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.652909226Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.654022206Z" level=info msg="Ran pod sandbox 1a39a7c19fb908233e8b29070d79fa89295252d0ec26cef124fbc73ff7c88219 with infra container: default/busybox/POD" id=40bc382f-8c8c-416b-a112-d4d3fa3e2469 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.65538957Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f68cd5f7-59a9-4613-b8ea-642cc6354353 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.655514924Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f68cd5f7-59a9-4613-b8ea-642cc6354353 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.655562381Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f68cd5f7-59a9-4613-b8ea-642cc6354353 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.656380515Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=571ce4b8-d312-4a9f-8cd7-2600d0142823 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:43:46 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:46.657815044Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:43:49 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:49.320060767Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=571ce4b8-d312-4a9f-8cd7-2600d0142823 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:43:49 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:49.320975772Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=095f08a9-f485-4d53-b182-eea297cfac75 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:43:49 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:49.322619858Z" level=info msg="Creating container: default/busybox/busybox" id=e6df1249-a564-4177-b93f-74115520f5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:43:49 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:49.322872633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:43:49 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:49.327367798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:43:49 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:49.327930774Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:43:49 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:49.364466369Z" level=info msg="Created container 247d6173ee0b090c35c5d2ec64bc4618ac7c6a4f0d1dc8e756a5f09ed526def9: default/busybox/busybox" id=e6df1249-a564-4177-b93f-74115520f5b8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:43:49 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:49.365075583Z" level=info msg="Starting container: 247d6173ee0b090c35c5d2ec64bc4618ac7c6a4f0d1dc8e756a5f09ed526def9" id=c805689e-e443-44d1-86d9-d93c4f769c32 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:43:49 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:49.367157506Z" level=info msg="Started container" PID=2223 containerID=247d6173ee0b090c35c5d2ec64bc4618ac7c6a4f0d1dc8e756a5f09ed526def9 description=default/busybox/busybox id=c805689e-e443-44d1-86d9-d93c4f769c32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a39a7c19fb908233e8b29070d79fa89295252d0ec26cef124fbc73ff7c88219
	Nov 23 08:43:55 old-k8s-version-057894 crio[781]: time="2025-11-23T08:43:55.41830537Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	247d6173ee0b0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   1a39a7c19fb90       busybox                                          default
	b9c429e23c934       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   41ea67961dca7       coredns-5dd5756b68-t8zg8                         kube-system
	a4c168b0e6a6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   7910e308e03a1       storage-provisioner                              kube-system
	0dd81c8796458       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   870786d8c49bd       kindnet-lwhjw                                    kube-system
	486d1d5178ed1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      29 seconds ago      Running             kube-proxy                0                   bd63406e39814       kube-proxy-6t2mg                                 kube-system
	15f5d15bc2f1b       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      47 seconds ago      Running             kube-apiserver            0                   ee33ac9fad375       kube-apiserver-old-k8s-version-057894            kube-system
	2c165c789e4ac       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      47 seconds ago      Running             kube-controller-manager   0                   e322e54858741       kube-controller-manager-old-k8s-version-057894   kube-system
	035b32b374d67       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      47 seconds ago      Running             kube-scheduler            0                   7b853b6f0de4b       kube-scheduler-old-k8s-version-057894            kube-system
	438d4d2a2fc6c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      47 seconds ago      Running             etcd                      0                   0d50fa235cc24       etcd-old-k8s-version-057894                      kube-system
	
	
	==> coredns [b9c429e23c934a4c88ac6669980f2bcf7c83d5d19a77aeb80c849d728fe2baeb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59575 - 33609 "HINFO IN 529449835112005728.7418990271313110405. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.505003809s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-057894
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-057894
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-057894
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_43_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-057894
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:43:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:43:45 +0000   Sun, 23 Nov 2025 08:43:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:43:45 +0000   Sun, 23 Nov 2025 08:43:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:43:45 +0000   Sun, 23 Nov 2025 08:43:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:43:45 +0000   Sun, 23 Nov 2025 08:43:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-057894
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c7ef2a9c-d9fc-4762-980c-1ef217fcf6e1
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-t8zg8                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-old-k8s-version-057894                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-lwhjw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-057894             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-057894    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-6t2mg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-057894             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-057894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientPID
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-057894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-057894 event: Registered Node old-k8s-version-057894 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-057894 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [438d4d2a2fc6cb6a0c45626c9ea7fd7cf0e65ce9ff2325db52ece47aa5bb068a] <==
	{"level":"info","ts":"2025-11-23T08:43:09.688937Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:43:09.690889Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:43:09.691149Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:43:09.691233Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:43:09.691295Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:43:09.691337Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:43:10.67872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T08:43:10.678779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T08:43:10.678802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-23T08:43:10.678817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:10.678822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:10.678831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:10.678838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T08:43:10.679753Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:10.679862Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:43:10.679872Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:43:10.679863Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-057894 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:43:10.680137Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:43:10.68016Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:43:10.68037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:10.680477Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:10.680509Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:43:10.681666Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:43:10.681983Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T08:43:45.545734Z","caller":"traceutil/trace.go:171","msg":"trace[365712465] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"136.415758ms","start":"2025-11-23T08:43:45.40929Z","end":"2025-11-23T08:43:45.545705Z","steps":["trace[365712465] 'process raft request'  (duration: 136.20912ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:43:57 up  1:26,  0 user,  load average: 4.56, 3.36, 2.15
	Linux old-k8s-version-057894 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0dd81c8796458a8b798fafcb42a5621ad7862e0615eed96f8be9112b44e20dcb] <==
	I1123 08:43:31.622961       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:43:31.623252       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:43:31.623402       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:43:31.623419       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:43:31.623438       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:43:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:43:31.994238       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:43:31.994502       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:43:31.994528       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:43:31.994756       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:43:32.295302       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:43:32.295333       1 metrics.go:72] Registering metrics
	I1123 08:43:32.295416       1 controller.go:711] "Syncing nftables rules"
	I1123 08:43:41.998057       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:43:41.998109       1 main.go:301] handling current node
	I1123 08:43:51.994034       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:43:51.994095       1 main.go:301] handling current node
	
	
	==> kube-apiserver [15f5d15bc2f1bc789a7610fa4d5f43328df38be40231a4ce6371d463207e016a] <==
	I1123 08:43:11.647345       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:43:11.647368       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:43:11.647376       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:43:11.647382       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:43:11.647390       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:43:11.647743       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:43:11.647822       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 08:43:11.648291       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:43:11.648308       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:43:11.827295       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:43:12.552643       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:43:12.556319       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:43:12.556339       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:43:12.929327       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:43:12.962393       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:43:13.054935       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:43:13.059496       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 08:43:13.060296       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:43:13.064401       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:43:13.579203       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:43:14.687768       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:43:14.696666       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:43:14.705197       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:43:27.150268       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 08:43:27.798673       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [2c165c789e4ac09d65266365be94ba1cefa19fa1025a6349ace5d838a31ea92e] <==
	I1123 08:43:27.050647       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:43:27.054673       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:43:27.154103       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:43:27.376574       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:43:27.392099       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:43:27.392135       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:43:27.809279       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6t2mg"
	I1123 08:43:27.812565       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lwhjw"
	I1123 08:43:27.888570       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-m8g4k"
	I1123 08:43:27.914901       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-t8zg8"
	I1123 08:43:27.941300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="786.205237ms"
	I1123 08:43:27.953621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.255159ms"
	I1123 08:43:27.985825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="32.14262ms"
	I1123 08:43:27.985943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.225µs"
	I1123 08:43:28.453484       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:43:28.498965       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-m8g4k"
	I1123 08:43:28.512251       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.330199ms"
	I1123 08:43:28.524297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.983833ms"
	I1123 08:43:28.525518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.277µs"
	I1123 08:43:42.492616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.757µs"
	I1123 08:43:42.514041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.745µs"
	I1123 08:43:43.872115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.461µs"
	I1123 08:43:43.906088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.104204ms"
	I1123 08:43:43.906544       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.093µs"
	I1123 08:43:47.018971       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [486d1d5178ed12dc3721599a6172c3bd2cb6eaa9135af33786db80649d74da46] <==
	I1123 08:43:28.289941       1 server_others.go:69] "Using iptables proxy"
	I1123 08:43:28.311174       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 08:43:28.387124       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:43:28.390565       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:43:28.390611       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:43:28.390620       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:43:28.390676       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:43:28.393527       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:43:28.393653       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:43:28.394709       1 config.go:188] "Starting service config controller"
	I1123 08:43:28.395307       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:43:28.395283       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:43:28.396793       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:43:28.396961       1 config.go:315] "Starting node config controller"
	I1123 08:43:28.396996       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:43:28.496617       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:43:28.497721       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:43:28.504174       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [035b32b374d675e497a80569ffbecf450724a42760b814a18ba27ccb89de7f8e] <==
	W1123 08:43:11.599553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 08:43:11.599628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1123 08:43:11.599556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 08:43:11.599830       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 08:43:11.600061       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:43:11.600103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:43:11.600151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:43:11.600121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:43:11.600084       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 08:43:11.600218       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 08:43:11.600061       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 08:43:11.600239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 08:43:11.600158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 08:43:11.600252       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 08:43:12.541230       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 08:43:12.541266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 08:43:12.560752       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:43:12.560794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:43:12.603913       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 08:43:12.603950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 08:43:12.745709       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 08:43:12.745751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 08:43:12.765599       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1123 08:43:12.765633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1123 08:43:13.094257       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.037567    1390 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.038345    1390 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.815943    1390 topology_manager.go:215] "Topology Admit Handler" podUID="d718da2c-03e9-429b-ae93-fb6053fa65b9" podNamespace="kube-system" podName="kube-proxy-6t2mg"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.824494    1390 topology_manager.go:215] "Topology Admit Handler" podUID="23c26128-6a1c-49ce-9584-c744e1c0020f" podNamespace="kube-system" podName="kindnet-lwhjw"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.939117    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/23c26128-6a1c-49ce-9584-c744e1c0020f-cni-cfg\") pod \"kindnet-lwhjw\" (UID: \"23c26128-6a1c-49ce-9584-c744e1c0020f\") " pod="kube-system/kindnet-lwhjw"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.939190    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23c26128-6a1c-49ce-9584-c744e1c0020f-xtables-lock\") pod \"kindnet-lwhjw\" (UID: \"23c26128-6a1c-49ce-9584-c744e1c0020f\") " pod="kube-system/kindnet-lwhjw"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.939222    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d718da2c-03e9-429b-ae93-fb6053fa65b9-lib-modules\") pod \"kube-proxy-6t2mg\" (UID: \"d718da2c-03e9-429b-ae93-fb6053fa65b9\") " pod="kube-system/kube-proxy-6t2mg"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.939262    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d718da2c-03e9-429b-ae93-fb6053fa65b9-kube-proxy\") pod \"kube-proxy-6t2mg\" (UID: \"d718da2c-03e9-429b-ae93-fb6053fa65b9\") " pod="kube-system/kube-proxy-6t2mg"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.939295    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d718da2c-03e9-429b-ae93-fb6053fa65b9-xtables-lock\") pod \"kube-proxy-6t2mg\" (UID: \"d718da2c-03e9-429b-ae93-fb6053fa65b9\") " pod="kube-system/kube-proxy-6t2mg"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.939328    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk7q5\" (UniqueName: \"kubernetes.io/projected/d718da2c-03e9-429b-ae93-fb6053fa65b9-kube-api-access-hk7q5\") pod \"kube-proxy-6t2mg\" (UID: \"d718da2c-03e9-429b-ae93-fb6053fa65b9\") " pod="kube-system/kube-proxy-6t2mg"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.939356    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23c26128-6a1c-49ce-9584-c744e1c0020f-lib-modules\") pod \"kindnet-lwhjw\" (UID: \"23c26128-6a1c-49ce-9584-c744e1c0020f\") " pod="kube-system/kindnet-lwhjw"
	Nov 23 08:43:27 old-k8s-version-057894 kubelet[1390]: I1123 08:43:27.939390    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk6fm\" (UniqueName: \"kubernetes.io/projected/23c26128-6a1c-49ce-9584-c744e1c0020f-kube-api-access-pk6fm\") pod \"kindnet-lwhjw\" (UID: \"23c26128-6a1c-49ce-9584-c744e1c0020f\") " pod="kube-system/kindnet-lwhjw"
	Nov 23 08:43:28 old-k8s-version-057894 kubelet[1390]: I1123 08:43:28.829630    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6t2mg" podStartSLOduration=1.829576289 podCreationTimestamp="2025-11-23 08:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:28.828506304 +0000 UTC m=+14.164739929" watchObservedRunningTime="2025-11-23 08:43:28.829576289 +0000 UTC m=+14.165809913"
	Nov 23 08:43:31 old-k8s-version-057894 kubelet[1390]: I1123 08:43:31.835618    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lwhjw" podStartSLOduration=1.600059619 podCreationTimestamp="2025-11-23 08:43:27 +0000 UTC" firstStartedPulling="2025-11-23 08:43:28.157918113 +0000 UTC m=+13.494151720" lastFinishedPulling="2025-11-23 08:43:31.393423904 +0000 UTC m=+16.729657512" observedRunningTime="2025-11-23 08:43:31.835283708 +0000 UTC m=+17.171517333" watchObservedRunningTime="2025-11-23 08:43:31.835565411 +0000 UTC m=+17.171799040"
	Nov 23 08:43:42 old-k8s-version-057894 kubelet[1390]: I1123 08:43:42.462233    1390 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:43:42 old-k8s-version-057894 kubelet[1390]: I1123 08:43:42.490059    1390 topology_manager.go:215] "Topology Admit Handler" podUID="8c02ffc7-dd73-4e75-b9c4-b386f8709f29" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:43:42 old-k8s-version-057894 kubelet[1390]: I1123 08:43:42.491603    1390 topology_manager.go:215] "Topology Admit Handler" podUID="f09dcee9-59c4-42e4-b347-ad3edcaf7e99" podNamespace="kube-system" podName="coredns-5dd5756b68-t8zg8"
	Nov 23 08:43:42 old-k8s-version-057894 kubelet[1390]: I1123 08:43:42.647093    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znmlq\" (UniqueName: \"kubernetes.io/projected/8c02ffc7-dd73-4e75-b9c4-b386f8709f29-kube-api-access-znmlq\") pod \"storage-provisioner\" (UID: \"8c02ffc7-dd73-4e75-b9c4-b386f8709f29\") " pod="kube-system/storage-provisioner"
	Nov 23 08:43:42 old-k8s-version-057894 kubelet[1390]: I1123 08:43:42.647163    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxsl8\" (UniqueName: \"kubernetes.io/projected/f09dcee9-59c4-42e4-b347-ad3edcaf7e99-kube-api-access-gxsl8\") pod \"coredns-5dd5756b68-t8zg8\" (UID: \"f09dcee9-59c4-42e4-b347-ad3edcaf7e99\") " pod="kube-system/coredns-5dd5756b68-t8zg8"
	Nov 23 08:43:42 old-k8s-version-057894 kubelet[1390]: I1123 08:43:42.647234    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8c02ffc7-dd73-4e75-b9c4-b386f8709f29-tmp\") pod \"storage-provisioner\" (UID: \"8c02ffc7-dd73-4e75-b9c4-b386f8709f29\") " pod="kube-system/storage-provisioner"
	Nov 23 08:43:42 old-k8s-version-057894 kubelet[1390]: I1123 08:43:42.647262    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f09dcee9-59c4-42e4-b347-ad3edcaf7e99-config-volume\") pod \"coredns-5dd5756b68-t8zg8\" (UID: \"f09dcee9-59c4-42e4-b347-ad3edcaf7e99\") " pod="kube-system/coredns-5dd5756b68-t8zg8"
	Nov 23 08:43:43 old-k8s-version-057894 kubelet[1390]: I1123 08:43:43.870723    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.870638206 podCreationTimestamp="2025-11-23 08:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:42.865723031 +0000 UTC m=+28.201956658" watchObservedRunningTime="2025-11-23 08:43:43.870638206 +0000 UTC m=+29.206872224"
	Nov 23 08:43:43 old-k8s-version-057894 kubelet[1390]: I1123 08:43:43.870832    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-t8zg8" podStartSLOduration=16.870803097 podCreationTimestamp="2025-11-23 08:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:43.870373896 +0000 UTC m=+29.206607522" watchObservedRunningTime="2025-11-23 08:43:43.870803097 +0000 UTC m=+29.207036721"
	Nov 23 08:43:46 old-k8s-version-057894 kubelet[1390]: I1123 08:43:46.332970    1390 topology_manager.go:215] "Topology Admit Handler" podUID="3dff7874-bfd3-4630-aa6d-acede64007db" podNamespace="default" podName="busybox"
	Nov 23 08:43:46 old-k8s-version-057894 kubelet[1390]: I1123 08:43:46.470828    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppfzp\" (UniqueName: \"kubernetes.io/projected/3dff7874-bfd3-4630-aa6d-acede64007db-kube-api-access-ppfzp\") pod \"busybox\" (UID: \"3dff7874-bfd3-4630-aa6d-acede64007db\") " pod="default/busybox"
	
	
	==> storage-provisioner [a4c168b0e6a6f787054224be46e8ffe482ec928577a2b5aceb00d439b86e7eae] <==
	I1123 08:43:42.855001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:43:42.866649       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:43:42.866745       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:43:42.875564       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:43:42.876109       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18c4da37-3156-4c26-a03d-1ad0569c542a", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-057894_bbd2850b-432d-488a-b0f5-1b67c1fbd992 became leader
	I1123 08:43:42.876273       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-057894_bbd2850b-432d-488a-b0f5-1b67c1fbd992!
	I1123 08:43:42.976551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-057894_bbd2850b-432d-488a-b0f5-1b67c1fbd992!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-057894 -n old-k8s-version-057894
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-057894 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.83s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.31s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (287.439241ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
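start_stop_delete_test.go:205 points at the shared failure mode: before enabling an addon, minikube first verifies the cluster is not paused, and that check shells out to `sudo runc list -f json` inside the node. On this crio-based node the runc state directory /run/runc is absent, so the check itself exits non-zero and the enable is aborted with MK_ADDON_ENABLE_PAUSED even though nothing is paused. A minimal way to confirm this by hand, assuming the node container name newest-cni-653361 taken from the inspect output below (a diagnostic sketch, not part of the recorded test run):

	docker exec newest-cni-653361 ls /run/runc            # expect "No such file or directory", matching the stderr above
	docker exec newest-cni-653361 sudo runc list -f json  # reproduces the exact command the pause check runs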
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-653361
helpers_test.go:243: (dbg) docker inspect newest-cni-653361:

-- stdout --
	[
	    {
	        "Id": "780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20",
	        "Created": "2025-11-23T08:44:05.576543108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311947,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:05.619066277Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/hostname",
	        "HostsPath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/hosts",
	        "LogPath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20-json.log",
	        "Name": "/newest-cni-653361",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-653361:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-653361",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20",
	                "LowerDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-653361",
	                "Source": "/var/lib/docker/volumes/newest-cni-653361/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-653361",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-653361",
	                "name.minikube.sigs.k8s.io": "newest-cni-653361",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3d4c29606112504afb689ccbe377acc17bc37dd3aad10d63546978cb288a0727",
	            "SandboxKey": "/var/run/docker/netns/3d4c29606112",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-653361": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1a370c90bc560610803aaed5e7a991a85cacb2851129df90c5009b204f306e40",
	                    "EndpointID": "e93bd8ccf8a5ee60e51be02f3d4d86b7895b9bb91a13039688ce5d2018a69a7e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3e:f9:e7:9e:8d:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-653361",
	                        "780e326c9456"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
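The inspect dump above shows the pattern minikube relies on for connectivity: every container port (22 for SSH, 2376, 5000, 32443, and 8443 for the apiserver) is published on an ephemeral 127.0.0.1 host port, and later log lines read those bindings back with a Go template. A minimal shell sketch of the same lookup, assuming the newest-cni-653361 container still exists on the host:

    # List every published port and its 127.0.0.1 binding
    docker port newest-cni-653361
    # Pull a single binding with the same template the harness uses below
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-653361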
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653361 -n newest-cni-653361
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-653361 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-653361 logs -n 25: (1.089379898s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-351793 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo docker system info                                                                                                                                                                                                      │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo containerd config dump                                                                                                                                                                                                  │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo crio config                                                                                                                                                                                                             │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p bridge-351793                                                                                                                                                                                                                              │ bridge-351793          │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ -p old-k8s-version-057894 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-057894 │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361      │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-057894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-057894 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-057894 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-653361      │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:44:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:44:16.418060  314636 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:44:16.418184  314636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:16.418195  314636 out.go:374] Setting ErrFile to fd 2...
	I1123 08:44:16.418200  314636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:16.418484  314636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:44:16.419017  314636 out.go:368] Setting JSON to false
	I1123 08:44:16.420248  314636 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5203,"bootTime":1763882253,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:44:16.420302  314636 start.go:143] virtualization: kvm guest
	I1123 08:44:16.422513  314636 out.go:179] * [old-k8s-version-057894] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:44:16.426605  314636 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:44:16.426606  314636 notify.go:221] Checking for updates...
	I1123 08:44:16.428841  314636 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:44:16.429902  314636 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:16.430819  314636 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:44:16.431702  314636 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:44:16.432602  314636 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:44:16.434097  314636 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:44:16.435753  314636 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:44:16.436562  314636 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:44:16.462564  314636 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:44:16.462643  314636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:16.532612  314636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:44:16.521915311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:16.532791  314636 docker.go:319] overlay module found
	I1123 08:44:16.535057  314636 out.go:179] * Using the docker driver based on existing profile
	I1123 08:44:16.536052  314636 start.go:309] selected driver: docker
	I1123 08:44:16.536065  314636 start.go:927] validating driver "docker" against &{Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:16.536188  314636 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:44:16.536795  314636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:16.600833  314636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:44:16.59146408 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:16.601200  314636 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:16.601242  314636 cni.go:84] Creating CNI manager for ""
	I1123 08:44:16.601318  314636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:16.601385  314636 start.go:353] cluster config:
	{Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:16.603067  314636 out.go:179] * Starting "old-k8s-version-057894" primary control-plane node in "old-k8s-version-057894" cluster
	I1123 08:44:16.603971  314636 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:44:16.605060  314636 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:44:16.606152  314636 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:44:16.606180  314636 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1123 08:44:16.606205  314636 cache.go:65] Caching tarball of preloaded images
	I1123 08:44:16.606246  314636 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:44:16.606294  314636 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:44:16.606309  314636 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 08:44:16.606401  314636 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/config.json ...
	I1123 08:44:16.629025  314636 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:44:16.629041  314636 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:44:16.629055  314636 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:44:16.629079  314636 start.go:360] acquireMachinesLock for old-k8s-version-057894: {Name:mk24ea9464b285d5ccac107c6969c1ae844d534b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:16.629128  314636 start.go:364] duration metric: took 33.636µs to acquireMachinesLock for "old-k8s-version-057894"
	I1123 08:44:16.629143  314636 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:44:16.629151  314636 fix.go:54] fixHost starting: 
	I1123 08:44:16.629339  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:16.650710  314636 fix.go:112] recreateIfNeeded on old-k8s-version-057894: state=Stopped err=<nil>
	W1123 08:44:16.650739  314636 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:44:13.642139  301517 node_ready.go:57] node "default-k8s-diff-port-726261" has "Ready":"False" status (will retry)
	W1123 08:44:16.142649  301517 node_ready.go:57] node "default-k8s-diff-port-726261" has "Ready":"False" status (will retry)
	W1123 08:44:13.972407  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	W1123 08:44:15.972617  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	W1123 08:44:18.472348  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	I1123 08:44:20.247025  310933 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:44:20.247135  310933 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:44:20.247262  310933 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:44:20.247346  310933 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:44:20.247409  310933 kubeadm.go:319] OS: Linux
	I1123 08:44:20.247472  310933 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:44:20.247514  310933 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:44:20.247591  310933 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:44:20.247675  310933 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:44:20.247768  310933 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:44:20.247846  310933 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:44:20.247920  310933 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:44:20.247982  310933 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:44:20.248089  310933 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:44:20.248229  310933 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:44:20.248363  310933 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:44:20.248480  310933 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:44:20.249638  310933 out.go:252]   - Generating certificates and keys ...
	I1123 08:44:20.249750  310933 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:44:20.249829  310933 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:44:20.249910  310933 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:44:20.249991  310933 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:44:20.250044  310933 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:44:20.250090  310933 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:44:20.250160  310933 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:44:20.250299  310933 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-653361] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:44:20.250384  310933 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:44:20.250497  310933 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-653361] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:44:20.250558  310933 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:44:20.250625  310933 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:44:20.250670  310933 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:44:20.250763  310933 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:44:20.250844  310933 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:44:20.250930  310933 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:44:20.251013  310933 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:44:20.251103  310933 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:44:20.251193  310933 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:44:20.251292  310933 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:44:20.251392  310933 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:44:20.253553  310933 out.go:252]   - Booting up control plane ...
	I1123 08:44:20.253634  310933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:44:20.253732  310933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:44:20.253862  310933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:44:20.253996  310933 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:44:20.254161  310933 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:44:20.254325  310933 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:44:20.254452  310933 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:44:20.254510  310933 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:44:20.254656  310933 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:44:20.254855  310933 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:44:20.254947  310933 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.635187ms
	I1123 08:44:20.255081  310933 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:44:20.255191  310933 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:44:20.255310  310933 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:44:20.255410  310933 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:44:20.255509  310933 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.543941134s
	I1123 08:44:20.255592  310933 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.711134146s
	I1123 08:44:20.255672  310933 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.50190822s
	I1123 08:44:20.255836  310933 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:44:20.255991  310933 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:44:20.256065  310933 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:44:20.256328  310933 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-653361 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:44:20.256394  310933 kubeadm.go:319] [bootstrap-token] Using token: 0wyvo8.gmxzh0st4hzmadft
	I1123 08:44:20.258116  310933 out.go:252]   - Configuring RBAC rules ...
	I1123 08:44:20.258221  310933 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:44:20.258316  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:44:20.258491  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:44:20.258665  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:44:20.258863  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:44:20.258955  310933 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:44:20.259072  310933 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:44:20.259116  310933 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:44:20.259177  310933 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:44:20.259186  310933 kubeadm.go:319] 
	I1123 08:44:20.259252  310933 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:44:20.259265  310933 kubeadm.go:319] 
	I1123 08:44:20.259329  310933 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:44:20.259335  310933 kubeadm.go:319] 
	I1123 08:44:20.259360  310933 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:44:20.259415  310933 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:44:20.259464  310933 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:44:20.259470  310933 kubeadm.go:319] 
	I1123 08:44:20.259529  310933 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:44:20.259536  310933 kubeadm.go:319] 
	I1123 08:44:20.259575  310933 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:44:20.259581  310933 kubeadm.go:319] 
	I1123 08:44:20.259624  310933 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:44:20.259706  310933 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:44:20.259768  310933 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:44:20.259774  310933 kubeadm.go:319] 
	I1123 08:44:20.259848  310933 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:44:20.259953  310933 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:44:20.259960  310933 kubeadm.go:319] 
	I1123 08:44:20.260033  310933 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0wyvo8.gmxzh0st4hzmadft \
	I1123 08:44:20.260124  310933 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 08:44:20.260143  310933 kubeadm.go:319] 	--control-plane 
	I1123 08:44:20.260152  310933 kubeadm.go:319] 
	I1123 08:44:20.260224  310933 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:44:20.260230  310933 kubeadm.go:319] 
	I1123 08:44:20.260302  310933 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0wyvo8.gmxzh0st4hzmadft \
	I1123 08:44:20.260407  310933 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
	I1123 08:44:20.260418  310933 cni.go:84] Creating CNI manager for ""
	I1123 08:44:20.260424  310933 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:20.261586  310933 out.go:179] * Configuring CNI (Container Networking Interface) ...
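The join commands printed by kubeadm above pin the cluster CA by its public-key hash (the --discovery-token-ca-cert-hash value). If that value is ever lost, it can be recomputed from the CA certificate; a sketch using the certificate directory the log reports (/var/lib/minikube/certs), following the standard kubeadm recipe and assuming an RSA CA key, which minikube generates by default:

    # Recompute the discovery-token-ca-cert-hash from the cluster CA (run on the control plane)
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'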
	I1123 08:44:16.654090  314636 out.go:252] * Restarting existing docker container for "old-k8s-version-057894" ...
	I1123 08:44:16.654186  314636 cli_runner.go:164] Run: docker start old-k8s-version-057894
	I1123 08:44:16.984977  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:17.023793  314636 kic.go:430] container "old-k8s-version-057894" state is running.
	I1123 08:44:17.024222  314636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-057894
	I1123 08:44:17.045881  314636 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/config.json ...
	I1123 08:44:17.046142  314636 machine.go:94] provisionDockerMachine start ...
	I1123 08:44:17.046245  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:17.063894  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:17.064129  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:17.064143  314636 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:44:17.064767  314636 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50416->127.0.0.1:33111: read: connection reset by peer
	I1123 08:44:20.207281  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-057894
	
	I1123 08:44:20.207320  314636 ubuntu.go:182] provisioning hostname "old-k8s-version-057894"
	I1123 08:44:20.207405  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.225411  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:20.225640  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:20.225654  314636 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-057894 && echo "old-k8s-version-057894" | sudo tee /etc/hostname
	I1123 08:44:20.384120  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-057894
	
	I1123 08:44:20.384196  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.401285  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:20.401561  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:20.401587  314636 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-057894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-057894/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-057894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:44:20.553936  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: 
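The SSH command above is minikube's idempotent hostname fix-up: it only touches /etc/hosts when no line already ends in the new hostname, rewriting an existing 127.0.1.1 entry in place or appending one otherwise. A quick way to verify the edit by hand, assuming the restarted container is still up:

    # Confirm the 127.0.1.1 entry and the kernel hostname agree
    docker exec old-k8s-version-057894 grep '127.0.1.1' /etc/hosts
    docker exec old-k8s-version-057894 hostname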
	I1123 08:44:20.553968  314636 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:44:20.554005  314636 ubuntu.go:190] setting up certificates
	I1123 08:44:20.554025  314636 provision.go:84] configureAuth start
	I1123 08:44:20.554402  314636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-057894
	I1123 08:44:20.592136  314636 provision.go:143] copyHostCerts
	I1123 08:44:20.592213  314636 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:44:20.592232  314636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:44:20.592312  314636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:44:20.592436  314636 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:44:20.592447  314636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:44:20.592484  314636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:44:20.592573  314636 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:44:20.592582  314636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:44:20.592614  314636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:44:20.592714  314636 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-057894 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-057894]
	I1123 08:44:20.652221  314636 provision.go:177] copyRemoteCerts
	I1123 08:44:20.652281  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:44:20.652322  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.672322  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:20.773680  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:44:20.790760  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:44:20.807788  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:44:20.824033  314636 provision.go:87] duration metric: took 269.99842ms to configureAuth
	I1123 08:44:20.824051  314636 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:44:20.824240  314636 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:44:20.824327  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.842425  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:20.842737  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:20.842764  314636 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:44:21.173321  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:44:21.173348  314636 machine.go:97] duration metric: took 4.127187999s to provisionDockerMachine
	I1123 08:44:21.173360  314636 start.go:293] postStartSetup for "old-k8s-version-057894" (driver="docker")
	I1123 08:44:21.173371  314636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:44:21.173426  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:44:21.173498  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.192289  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:21.293367  314636 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:44:21.296864  314636 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:44:21.296893  314636 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:44:21.296904  314636 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:44:21.296969  314636 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:44:21.297081  314636 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:44:21.297209  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:44:21.304802  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:44:21.321290  314636 start.go:296] duration metric: took 147.91911ms for postStartSetup
	I1123 08:44:21.321383  314636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:44:21.321433  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.339672  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:18.642288  301517 node_ready.go:49] node "default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:18.642321  301517 node_ready.go:38] duration metric: took 11.503564271s for node "default-k8s-diff-port-726261" to be "Ready" ...
	I1123 08:44:18.642339  301517 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:18.642388  301517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:18.658421  301517 api_server.go:72] duration metric: took 11.812908089s to wait for apiserver process to appear ...
	I1123 08:44:18.658458  301517 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:18.658477  301517 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:44:18.663288  301517 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:44:18.664345  301517 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:18.664369  301517 api_server.go:131] duration metric: took 5.904232ms to wait for apiserver health ...
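The health probe above is a plain HTTPS GET against /healthz, which the apiserver's default RBAC exposes to unauthenticated callers; the harness treats a 200 response with body "ok" as healthy. The same check by hand, with certificate verification skipped since the endpoint presents the cluster's self-signed CA:

    # Manual version of the harness's healthz poll
    curl -k https://192.168.85.2:8444/healthz
    # Prints "ok" on success, as the log shows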
	I1123 08:44:18.664377  301517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:18.668437  301517 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:18.668483  301517 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:18.668492  301517 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:18.668501  301517 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:18.668511  301517 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:18.668516  301517 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:18.668521  301517 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:18.668529  301517 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:18.668535  301517 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:18.668543  301517 system_pods.go:74] duration metric: took 4.160794ms to wait for pod list to return data ...
	I1123 08:44:18.668557  301517 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:18.670768  301517 default_sa.go:45] found service account: "default"
	I1123 08:44:18.670786  301517 default_sa.go:55] duration metric: took 2.223017ms for default service account to be created ...
	I1123 08:44:18.670796  301517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:44:18.673368  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:18.673401  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:18.673412  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:18.673425  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:18.673434  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:18.673449  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:18.673462  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:18.673471  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:18.673479  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:18.673510  301517 retry.go:31] will retry after 273.138898ms: missing components: kube-dns
	I1123 08:44:18.950428  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:18.950462  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:18.950468  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:18.950474  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:18.950477  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:18.950486  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:18.950492  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:18.950497  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:18.950505  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:18.950527  301517 retry.go:31] will retry after 324.368056ms: missing components: kube-dns
	I1123 08:44:19.282612  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:19.282655  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:19.282664  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:19.282681  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:19.282711  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:19.282717  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:19.282722  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:19.282728  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:19.282735  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:19.282752  301517 retry.go:31] will retry after 341.175275ms: missing components: kube-dns
	I1123 08:44:19.628067  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:19.628106  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:19.628115  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:19.628124  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:19.628131  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:19.628136  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:19.628141  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:19.628147  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:19.628151  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:19.628166  301517 retry.go:31] will retry after 385.479643ms: missing components: kube-dns
	I1123 08:44:20.019211  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:20.019262  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running
	I1123 08:44:20.019271  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:20.019278  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:20.019290  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:20.019297  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:20.019302  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:20.019307  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:20.019313  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running
	I1123 08:44:20.019328  301517 system_pods.go:126] duration metric: took 1.348525547s to wait for k8s-apps to be running ...
	I1123 08:44:20.019337  301517 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:44:20.019398  301517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:20.032534  301517 system_svc.go:56] duration metric: took 13.191771ms WaitForService to wait for kubelet
	I1123 08:44:20.032556  301517 kubeadm.go:587] duration metric: took 13.187050567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:20.032570  301517 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:20.035222  301517 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:20.035255  301517 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:20.035272  301517 node_conditions.go:105] duration metric: took 2.697218ms to run NodePressure ...
	I1123 08:44:20.035284  301517 start.go:242] waiting for startup goroutines ...
	I1123 08:44:20.035296  301517 start.go:247] waiting for cluster config update ...
	I1123 08:44:20.035308  301517 start.go:256] writing updated cluster config ...
	I1123 08:44:20.035582  301517 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:20.039148  301517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:20.042349  301517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.046265  301517 pod_ready.go:94] pod "coredns-66bc5c9577-8f8f5" is "Ready"
	I1123 08:44:20.046284  301517 pod_ready.go:86] duration metric: took 3.909737ms for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.048015  301517 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.051563  301517 pod_ready.go:94] pod "etcd-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:20.051582  301517 pod_ready.go:86] duration metric: took 3.548608ms for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.053391  301517 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.058527  301517 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:20.058551  301517 pod_ready.go:86] duration metric: took 5.13961ms for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.060160  301517 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.443432  301517 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:20.443460  301517 pod_ready.go:86] duration metric: took 383.282782ms for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.644026  301517 pod_ready.go:83] waiting for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.043432  301517 pod_ready.go:94] pod "kube-proxy-sn4sp" is "Ready"
	I1123 08:44:21.043456  301517 pod_ready.go:86] duration metric: took 399.407792ms for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.244389  301517 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.644143  301517 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:21.644175  301517 pod_ready.go:86] duration metric: took 399.759889ms for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.644190  301517 pod_ready.go:40] duration metric: took 1.605017538s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:21.697309  301517 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:21.699630  301517 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-726261" cluster and "default" namespace by default
	I1123 08:44:21.437237  314636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:44:21.441902  314636 fix.go:56] duration metric: took 4.812745863s for fixHost
	I1123 08:44:21.441927  314636 start.go:83] releasing machines lock for "old-k8s-version-057894", held for 4.812789083s
	I1123 08:44:21.441996  314636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-057894
	I1123 08:44:21.461031  314636 ssh_runner.go:195] Run: cat /version.json
	I1123 08:44:21.461084  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.461105  314636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:44:21.461168  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.480163  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:21.480473  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:21.634506  314636 ssh_runner.go:195] Run: systemctl --version
	I1123 08:44:21.641286  314636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:44:21.685409  314636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:44:21.690169  314636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:44:21.690228  314636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:44:21.698154  314636 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:44:21.698171  314636 start.go:496] detecting cgroup driver to use...
	I1123 08:44:21.698198  314636 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:44:21.698236  314636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:44:21.711950  314636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:44:21.726746  314636 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:44:21.726796  314636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:44:21.741579  314636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:44:21.754743  314636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:44:21.841306  314636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:44:21.930875  314636 docker.go:234] disabling docker service ...
	I1123 08:44:21.930940  314636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:44:21.944498  314636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:44:21.957091  314636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:44:22.052960  314636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:44:22.135533  314636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:44:22.147635  314636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:44:22.163753  314636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 08:44:22.163824  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.173900  314636 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:44:22.173957  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.184459  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.193984  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.202599  314636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:44:22.212728  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.221809  314636 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.229818  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
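(Taken together, the crictl bootstrap at 08:44:22.147 and the sed edits above leave two small configuration artifacts on disk. A sketch of the resulting lines, derived directly from the commands in the log; the surrounding TOML section headers of 02-crio.conf are omitted since those come from the stock image:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (excerpt)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
)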
	I1123 08:44:22.238209  314636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:44:22.245345  314636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
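(The two sysctl Runs above cover kubeadm's networking preflight: bridge-nf-call-iptables is read back to confirm bridged pod traffic will traverse iptables, and net.ipv4.ip_forward must be 1 for the node to route pod traffic at all. A hand-run equivalent:

	sysctl net.bridge.bridge-nf-call-iptables          # kubeadm's preflight expects 1
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward    # same effect as the sudo sh -c above
)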
	I1123 08:44:22.252238  314636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:22.338869  314636 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:44:22.479721  314636 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:44:22.479814  314636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:44:22.483897  314636 start.go:564] Will wait 60s for crictl version
	I1123 08:44:22.483945  314636 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.487547  314636 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:44:22.519750  314636 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:44:22.519832  314636 ssh_runner.go:195] Run: crio --version
	I1123 08:44:22.551262  314636 ssh_runner.go:195] Run: crio --version
	I1123 08:44:22.580715  314636 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 08:44:22.581831  314636 cli_runner.go:164] Run: docker network inspect old-k8s-version-057894 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:44:22.599083  314636 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:44:22.603144  314636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:22.613848  314636 kubeadm.go:884] updating cluster {Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:44:22.613944  314636 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:44:22.613998  314636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:22.647518  314636 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:44:22.647542  314636 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:44:22.647616  314636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:22.675816  314636 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:44:22.675840  314636 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:44:22.675848  314636 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1123 08:44:22.675954  314636 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-057894 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
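(The empty ExecStart= line in the rendered kubelet unit above is deliberate: in a systemd drop-in, assigning an empty value resets the ExecStart list inherited from the base kubelet.service before the next line redefines it. Schematically, the drop-in scp'd to disk a few lines below looks like:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch)
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet ...   # flags exactly as rendered above
)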
	I1123 08:44:22.676050  314636 ssh_runner.go:195] Run: crio config
	I1123 08:44:22.733251  314636 cni.go:84] Creating CNI manager for ""
	I1123 08:44:22.733275  314636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:22.733293  314636 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:44:22.733329  314636 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-057894 NodeName:old-k8s-version-057894 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:44:22.733544  314636 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-057894"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:44:22.733619  314636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:44:22.744170  314636 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:44:22.744228  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:44:22.752661  314636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 08:44:22.768641  314636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:44:22.782019  314636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1123 08:44:22.796003  314636 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:44:22.800977  314636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:22.813321  314636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:22.903093  314636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:22.925936  314636 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894 for IP: 192.168.76.2
	I1123 08:44:22.925957  314636 certs.go:195] generating shared ca certs ...
	I1123 08:44:22.925976  314636 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:22.926151  314636 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:44:22.926214  314636 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:44:22.926226  314636 certs.go:257] generating profile certs ...
	I1123 08:44:22.926325  314636 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/client.key
	I1123 08:44:22.926393  314636 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/apiserver.key.249ce811
	I1123 08:44:22.926443  314636 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/proxy-client.key
	I1123 08:44:22.926574  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:44:22.926615  314636 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:44:22.926627  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:44:22.926663  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:44:22.926714  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:44:22.926747  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:44:22.926807  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:44:22.927577  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:44:22.946066  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:44:22.965035  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:44:22.983167  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:44:23.004136  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:44:23.025198  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:44:23.041558  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:44:23.058566  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:44:23.074677  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:44:23.091292  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:44:23.107997  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:44:23.125834  314636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:44:23.138442  314636 ssh_runner.go:195] Run: openssl version
	I1123 08:44:23.144241  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:44:23.152727  314636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:23.156543  314636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:23.156592  314636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:23.194469  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:44:23.202009  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:44:23.210602  314636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:44:23.214015  314636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:44:23.214065  314636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:44:23.247847  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
	I1123 08:44:23.255072  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:44:23.263009  314636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:44:23.266387  314636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:44:23.266430  314636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:44:23.300576  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
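(The three openssl/ln pairs above implement OpenSSL's hashed-directory CA lookup: the link name is <subject-hash>.0, where the hash is whatever `openssl x509 -hash -noout` prints for the certificate (b5213941 for minikubeCA in this run). One round of the pattern, as a sketch:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
)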
	I1123 08:44:23.308629  314636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:44:23.312141  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:44:23.346219  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:44:23.381481  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:44:23.417721  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:44:23.461311  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:44:23.504474  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
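(Each -checkend 86400 probe above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 h); a non-zero exit presumably flags the cert for regeneration before reuse. Checking one of them manually:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	  echo "valid for at least another 24h"
	else
	  echo "expires within 24h"
	fi
)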
	I1123 08:44:23.560218  314636 kubeadm.go:401] StartCluster: {Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:23.560327  314636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:44:23.560395  314636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:44:23.599229  314636 cri.go:89] found id: "35f8086b1de4e31006310dbc9225c47fc7ce015e3238258161e81fc2d1c7f4bd"
	I1123 08:44:23.599258  314636 cri.go:89] found id: "62bca8b239fd282ce38b86b21b9897cfdd1cd66996c68c577fb4d9a16baca0f8"
	I1123 08:44:23.599264  314636 cri.go:89] found id: "46e574a85cdd50d2ed3dfea9bf9e72260185653dd7313da97ccc3c575be7c1e6"
	I1123 08:44:23.599270  314636 cri.go:89] found id: "5ed59b21f5fe5a105c3165b1f30786d03b6ba7fda1e27532fd0541a8a4b0df67"
	I1123 08:44:23.599284  314636 cri.go:89] found id: ""
	I1123 08:44:23.599331  314636 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 08:44:23.612757  314636 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:23Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:44:23.612946  314636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:44:23.621797  314636 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:44:23.621814  314636 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:44:23.621861  314636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:44:23.630422  314636 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:44:23.631238  314636 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-057894" does not appear in /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:23.631790  314636 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-10964/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-057894" cluster setting kubeconfig missing "old-k8s-version-057894" context setting]
	I1123 08:44:23.632584  314636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:23.634289  314636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:44:23.643154  314636 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 08:44:23.643181  314636 kubeadm.go:602] duration metric: took 21.360308ms to restartPrimaryControlPlane
	I1123 08:44:23.643190  314636 kubeadm.go:403] duration metric: took 82.98118ms to StartCluster
	I1123 08:44:23.643205  314636 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:23.643264  314636 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:23.644605  314636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:23.644839  314636 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:44:23.644977  314636 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:23.645117  314636 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-057894"
	I1123 08:44:23.645134  314636 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-057894"
	W1123 08:44:23.645142  314636 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:44:23.645143  314636 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:44:23.645155  314636 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-057894"
	I1123 08:44:23.645176  314636 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-057894"
	I1123 08:44:23.645188  314636 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:44:23.645252  314636 addons.go:70] Setting dashboard=true in profile "old-k8s-version-057894"
	I1123 08:44:23.645268  314636 addons.go:239] Setting addon dashboard=true in "old-k8s-version-057894"
	W1123 08:44:23.645275  314636 addons.go:248] addon dashboard should already be in state true
	I1123 08:44:23.645311  314636 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:44:23.645517  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.645713  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.645745  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.649572  314636 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:23.652170  314636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:23.673583  314636 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:23.674718  314636 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:23.674737  314636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:23.674752  314636 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:44:23.674789  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:23.675471  314636 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-057894"
	W1123 08:44:23.675491  314636 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:44:23.675516  314636 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:44:23.676047  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.679811  314636 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1123 08:44:20.472384  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	I1123 08:44:22.472469  299523 node_ready.go:49] node "no-preload-187607" is "Ready"
	I1123 08:44:22.472501  299523 node_ready.go:38] duration metric: took 13.003189401s for node "no-preload-187607" to be "Ready" ...
	I1123 08:44:22.472517  299523 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:22.472570  299523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:22.485582  299523 api_server.go:72] duration metric: took 13.302203208s to wait for apiserver process to appear ...
	I1123 08:44:22.485608  299523 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:22.485625  299523 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:44:22.490169  299523 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:44:22.491237  299523 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:22.491264  299523 api_server.go:131] duration metric: took 5.649677ms to wait for apiserver health ...
	I1123 08:44:22.491274  299523 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:22.496993  299523 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:22.497040  299523 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:22.497056  299523 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running
	I1123 08:44:22.497068  299523 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running
	I1123 08:44:22.497075  299523 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running
	I1123 08:44:22.497090  299523 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running
	I1123 08:44:22.497097  299523 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running
	I1123 08:44:22.497103  299523 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running
	I1123 08:44:22.497119  299523 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:22.497130  299523 system_pods.go:74] duration metric: took 5.849104ms to wait for pod list to return data ...
	I1123 08:44:22.497140  299523 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:22.499755  299523 default_sa.go:45] found service account: "default"
	I1123 08:44:22.499774  299523 default_sa.go:55] duration metric: took 2.624023ms for default service account to be created ...
	I1123 08:44:22.499783  299523 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:44:22.502854  299523 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:22.502878  299523 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:22.502883  299523 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running
	I1123 08:44:22.502889  299523 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running
	I1123 08:44:22.502903  299523 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running
	I1123 08:44:22.502911  299523 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running
	I1123 08:44:22.502914  299523 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running
	I1123 08:44:22.502918  299523 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running
	I1123 08:44:22.502922  299523 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:22.502947  299523 retry.go:31] will retry after 212.635743ms: missing components: kube-dns
	I1123 08:44:22.720827  299523 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:22.720860  299523 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running
	I1123 08:44:22.720868  299523 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running
	I1123 08:44:22.720874  299523 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running
	I1123 08:44:22.720879  299523 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running
	I1123 08:44:22.720884  299523 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running
	I1123 08:44:22.720889  299523 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running
	I1123 08:44:22.720894  299523 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running
	I1123 08:44:22.720898  299523 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running
	I1123 08:44:22.720908  299523 system_pods.go:126] duration metric: took 221.118098ms to wait for k8s-apps to be running ...
	I1123 08:44:22.720921  299523 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:44:22.720967  299523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:22.737856  299523 system_svc.go:56] duration metric: took 16.926837ms WaitForService to wait for kubelet
	I1123 08:44:22.737885  299523 kubeadm.go:587] duration metric: took 13.554508173s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:22.737907  299523 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:22.741435  299523 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:22.741466  299523 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:22.741501  299523 node_conditions.go:105] duration metric: took 3.587505ms to run NodePressure ...
	I1123 08:44:22.741521  299523 start.go:242] waiting for startup goroutines ...
	I1123 08:44:22.741530  299523 start.go:247] waiting for cluster config update ...
	I1123 08:44:22.741543  299523 start.go:256] writing updated cluster config ...
	I1123 08:44:22.741835  299523 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:22.746467  299523 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:22.750370  299523 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.755106  299523 pod_ready.go:94] pod "coredns-66bc5c9577-khlrk" is "Ready"
	I1123 08:44:22.755127  299523 pod_ready.go:86] duration metric: took 4.736609ms for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.757334  299523 pod_ready.go:83] waiting for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.761272  299523 pod_ready.go:94] pod "etcd-no-preload-187607" is "Ready"
	I1123 08:44:22.761300  299523 pod_ready.go:86] duration metric: took 3.934649ms for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.763291  299523 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.767155  299523 pod_ready.go:94] pod "kube-apiserver-no-preload-187607" is "Ready"
	I1123 08:44:22.767175  299523 pod_ready.go:86] duration metric: took 3.862325ms for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.769311  299523 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.150645  299523 pod_ready.go:94] pod "kube-controller-manager-no-preload-187607" is "Ready"
	I1123 08:44:23.150674  299523 pod_ready.go:86] duration metric: took 381.341589ms for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.350884  299523 pod_ready.go:83] waiting for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.751044  299523 pod_ready.go:94] pod "kube-proxy-f9d8j" is "Ready"
	I1123 08:44:23.751078  299523 pod_ready.go:86] duration metric: took 400.167313ms for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.952910  299523 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:24.350764  299523 pod_ready.go:94] pod "kube-scheduler-no-preload-187607" is "Ready"
	I1123 08:44:24.350789  299523 pod_ready.go:86] duration metric: took 397.819843ms for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:24.350803  299523 pod_ready.go:40] duration metric: took 1.604299274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:24.397775  299523 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:24.399158  299523 out.go:179] * Done! kubectl is now configured to use "no-preload-187607" cluster and "default" namespace by default
	I1123 08:44:20.262746  310933 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:44:20.266869  310933 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:44:20.266886  310933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:44:20.280566  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:44:20.496233  310933 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:44:20.496351  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-653361 minikube.k8s.io/updated_at=2025_11_23T08_44_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=newest-cni-653361 minikube.k8s.io/primary=true
	I1123 08:44:20.496443  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:20.507988  310933 ops.go:34] apiserver oom_adj: -16
	I1123 08:44:20.606165  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:21.106489  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:21.606483  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:22.106819  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:22.606297  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:23.106344  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:23.606998  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:24.106482  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:24.606886  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:24.680990  310933 kubeadm.go:1114] duration metric: took 4.184613866s to wait for elevateKubeSystemPrivileges
	I1123 08:44:24.681030  310933 kubeadm.go:403] duration metric: took 14.513667228s to StartCluster
	I1123 08:44:24.681047  310933 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:24.681116  310933 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:24.682504  310933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:24.682726  310933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:44:24.682742  310933 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:44:24.682798  310933 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:24.682915  310933 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-653361"
	I1123 08:44:24.682939  310933 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-653361"
	I1123 08:44:24.682965  310933 config.go:182] Loaded profile config "newest-cni-653361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:44:24.682957  310933 addons.go:70] Setting default-storageclass=true in profile "newest-cni-653361"
	I1123 08:44:24.683026  310933 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-653361"
	I1123 08:44:24.682973  310933 host.go:66] Checking if "newest-cni-653361" exists ...
	I1123 08:44:24.683360  310933 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:24.683566  310933 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:24.684903  310933 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:24.686286  310933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:24.707731  310933 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:24.708852  310933 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:24.708871  310933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:24.708940  310933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653361
	I1123 08:44:24.709498  310933 addons.go:239] Setting addon default-storageclass=true in "newest-cni-653361"
	I1123 08:44:24.709538  310933 host.go:66] Checking if "newest-cni-653361" exists ...
	I1123 08:44:24.710030  310933 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:24.736352  310933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/newest-cni-653361/id_rsa Username:docker}
	I1123 08:44:24.738589  310933 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:24.738609  310933 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:24.738666  310933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653361
	I1123 08:44:24.763567  310933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/newest-cni-653361/id_rsa Username:docker}
	I1123 08:44:24.781923  310933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:44:24.841189  310933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:24.875213  310933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:24.896839  310933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:24.990558  310933 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 08:44:24.991988  310933 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:24.992052  310933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:25.206427  310933 api_server.go:72] duration metric: took 523.65435ms to wait for apiserver process to appear ...
	I1123 08:44:25.206454  310933 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:25.206475  310933 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:44:25.213238  310933 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:44:25.214239  310933 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:25.214267  310933 api_server.go:131] duration metric: took 7.804462ms to wait for apiserver health ...
	I1123 08:44:25.214277  310933 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:25.214620  310933 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:44:25.216658  310933 addons.go:530] duration metric: took 533.865585ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:44:25.217317  310933 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:25.217348  310933 system_pods.go:61] "coredns-66bc5c9577-7bttc" [db2ce82f-dd5e-452f-9b7c-4f814d6d4824] Pending
	I1123 08:44:25.217359  310933 system_pods.go:61] "etcd-newest-cni-653361" [c88c51f3-384a-4e42-a5b5-eb56b4063ca0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:44:25.217368  310933 system_pods.go:61] "kindnet-sv4xk" [bf003336-6803-41a9-aaea-9aba51c062be] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:44:25.217382  310933 system_pods.go:61] "kube-apiserver-newest-cni-653361" [555ae394-11ee-4c38-9844-0eb84e52169e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:44:25.217392  310933 system_pods.go:61] "kube-controller-manager-newest-cni-653361" [65cfedeb-a3c7-4a0c-a38f-30b249ee0c5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:44:25.217401  310933 system_pods.go:61] "kube-proxy-hwjc5" [4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:44:25.217408  310933 system_pods.go:61] "kube-scheduler-newest-cni-653361" [158da57a-3f1c-4de3-94b2-d90400674ba2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:44:25.217417  310933 system_pods.go:61] "storage-provisioner" [3d48cd45-8d74-48f3-8cab-01e61921311b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:44:25.217425  310933 system_pods.go:74] duration metric: took 3.141242ms to wait for pod list to return data ...
	I1123 08:44:25.217434  310933 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:25.219598  310933 default_sa.go:45] found service account: "default"
	I1123 08:44:25.219617  310933 default_sa.go:55] duration metric: took 2.17718ms for default service account to be created ...
	I1123 08:44:25.219630  310933 kubeadm.go:587] duration metric: took 536.861993ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:44:25.219652  310933 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:25.222457  310933 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:25.222483  310933 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:25.222500  310933 node_conditions.go:105] duration metric: took 2.842318ms to run NodePressure ...
	I1123 08:44:25.222513  310933 start.go:242] waiting for startup goroutines ...
	I1123 08:44:25.495596  310933 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-653361" context rescaled to 1 replicas
	I1123 08:44:25.495650  310933 start.go:247] waiting for cluster config update ...
	I1123 08:44:25.495666  310933 start.go:256] writing updated cluster config ...
	I1123 08:44:25.495988  310933 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:25.550187  310933 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:25.551644  310933 out.go:179] * Done! kubectl is now configured to use "newest-cni-653361" cluster and "default" namespace by default
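	
	The healthz wait at 08:44:25 above is a plain HTTPS GET against the apiserver. A hedged one-liner reproducing it, with the endpoint taken from the log; `-k` skips TLS verification purely for illustration (the real check uses the cluster CA):
	
	  # Probe the apiserver health endpoint seen in the log (expects the body "ok").
	  curl -sk https://192.168.103.2:8443/healthz
	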
	I1123 08:44:23.681150  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:44:23.681176  314636 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:44:23.681240  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:23.709889  314636 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:23.709913  314636 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:23.709973  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:23.713967  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:23.717214  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:23.743544  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:23.815302  314636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:23.828243  314636 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-057894" to be "Ready" ...
	I1123 08:44:23.839717  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:44:23.839738  314636 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:44:23.844025  314636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:23.855392  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:44:23.855415  314636 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:44:23.871166  314636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:23.871577  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:44:23.871592  314636 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:44:23.887496  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:44:23.887520  314636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:44:23.905677  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:44:23.905739  314636 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:44:23.932066  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:44:23.932089  314636 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:44:23.975917  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:44:23.975942  314636 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:44:23.992525  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:44:23.992545  314636 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:44:24.006432  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:44:24.006455  314636 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:44:24.021494  314636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:44:25.930334  314636 node_ready.go:49] node "old-k8s-version-057894" is "Ready"
	I1123 08:44:25.930364  314636 node_ready.go:38] duration metric: took 2.102095132s for node "old-k8s-version-057894" to be "Ready" ...
	I1123 08:44:25.930379  314636 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:25.930433  314636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
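	
	This excerpt ends with the same apiserver process check that minikube runs over SSH. An equivalent sketch from the host, assuming the "old-k8s-version-057894" profile name from the log:
	
	  # Re-run the process check from the log inside the node (sketch).
	  minikube -p old-k8s-version-057894 ssh -- \
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	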
	
	
	==> CRI-O <==
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.757885748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.763053965Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a9b1c01e-6a93-4662-8b86-d4d5d845619b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.76420758Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=31883ef9-29f6-4f52-9618-490aaff08894 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.766040735Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.766983813Z" level=info msg="Ran pod sandbox 866339178c3ed144071ccd534f638e603fc2c22c4bd3d45c6b959eab4da95a31 with infra container: kube-system/kindnet-sv4xk/POD" id=a9b1c01e-6a93-4662-8b86-d4d5d845619b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.767811763Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.768292446Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9696b7c2-e46e-4460-9648-87106117d8e4 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.768811086Z" level=info msg="Ran pod sandbox 437b0d89540523012374d72c7cfa9dc71b5e63463dc58341224424e27daa8eb6 with infra container: kube-system/kube-proxy-hwjc5/POD" id=31883ef9-29f6-4f52-9618-490aaff08894 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.769594285Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0035c9c6-b29c-421e-a524-c0e82cafe9ff name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.774964726Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0b4967f3-5aa5-44f0-8be2-26466044998a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.776268227Z" level=info msg="Creating container: kube-system/kindnet-sv4xk/kindnet-cni" id=c30eacfa-9500-41ae-a8ab-552e72d6f7fc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.776377084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.781429262Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9c69ce8b-47c8-4b19-8ff2-af977f9ab421 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.788877594Z" level=info msg="Creating container: kube-system/kube-proxy-hwjc5/kube-proxy" id=1adf9533-5056-4c81-9d49-92cc4575f9ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.789098056Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.79077659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.791577309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.797570025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.798328572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.831528876Z" level=info msg="Created container 3b96b6f005b722b245727e50000d267614633f8032cbabeffda72b3550a22890: kube-system/kindnet-sv4xk/kindnet-cni" id=c30eacfa-9500-41ae-a8ab-552e72d6f7fc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.832427668Z" level=info msg="Starting container: 3b96b6f005b722b245727e50000d267614633f8032cbabeffda72b3550a22890" id=485087ef-6de2-4362-b694-06d3c71ba134 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.833925577Z" level=info msg="Created container f8e91622e0835c7cbd5c2cd6d75913867afa5e14ace98e6b40321e465a870ed9: kube-system/kube-proxy-hwjc5/kube-proxy" id=1adf9533-5056-4c81-9d49-92cc4575f9ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.834398382Z" level=info msg="Starting container: f8e91622e0835c7cbd5c2cd6d75913867afa5e14ace98e6b40321e465a870ed9" id=9f2c457f-c7ed-4313-b26e-4bca8312303e name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.834645135Z" level=info msg="Started container" PID=1598 containerID=3b96b6f005b722b245727e50000d267614633f8032cbabeffda72b3550a22890 description=kube-system/kindnet-sv4xk/kindnet-cni id=485087ef-6de2-4362-b694-06d3c71ba134 name=/runtime.v1.RuntimeService/StartContainer sandboxID=866339178c3ed144071ccd534f638e603fc2c22c4bd3d45c6b959eab4da95a31
	Nov 23 08:44:25 newest-cni-653361 crio[778]: time="2025-11-23T08:44:25.837276654Z" level=info msg="Started container" PID=1597 containerID=f8e91622e0835c7cbd5c2cd6d75913867afa5e14ace98e6b40321e465a870ed9 description=kube-system/kube-proxy-hwjc5/kube-proxy id=9f2c457f-c7ed-4313-b26e-4bca8312303e name=/runtime.v1.RuntimeService/StartContainer sandboxID=437b0d89540523012374d72c7cfa9dc71b5e63463dc58341224424e27daa8eb6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f8e91622e0835       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   437b0d8954052       kube-proxy-hwjc5                            kube-system
	3b96b6f005b72       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   866339178c3ed       kindnet-sv4xk                               kube-system
	d4ab4e66de33e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   674db82bf6db4       kube-scheduler-newest-cni-653361            kube-system
	c3d8f23207db6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   4ce06077d1fe6       kube-apiserver-newest-cni-653361            kube-system
	c01c61dfba4c5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   4916672d9c05c       etcd-newest-cni-653361                      kube-system
	2598511ece3d2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   35aa3bcfebdc8       kube-controller-manager-newest-cni-653361   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-653361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-653361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=newest-cni-653361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-653361
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:44:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:19 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:19 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:19 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 08:44:19 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-653361
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ad84826e-a86e-489e-9a4b-5295789043d1
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-653361                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-sv4xk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-653361             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-653361    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-hwjc5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-653361             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-653361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-653361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-653361 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-653361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-653361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-653361 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-653361 event: Registered Node newest-cni-653361 in Controller
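	
	Note the Taints line above (node.kubernetes.io/not-ready:NoSchedule): it is what leaves the storage-provisioner pod Unschedulable earlier in this log until the CNI comes up. A small sketch for checking the taint directly, assuming the "newest-cni-653361" context from the log:
	
	  # Inspect the node's taints (sketch; context name taken from the log).
	  kubectl --context newest-cni-653361 get node newest-cni-653361 \
	    -o jsonpath='{.spec.taints}'
	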
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [c01c61dfba4c59663a3332d613bec53c48dc6460d860d79a4c86d5bc93d3bc31] <==
	{"level":"warn","ts":"2025-11-23T08:44:16.268502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.274843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.281945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.298738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.306373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.312662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.319321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.326111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.332886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.338792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.345878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.360493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.366918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.372953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.379166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.385956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.392930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.401468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.407921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.415495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.423139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.430219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.446321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.453659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:16.524251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51548","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:44:27 up  1:26,  0 user,  load average: 5.17, 3.62, 2.27
	Linux newest-cni-653361 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3b96b6f005b722b245727e50000d267614633f8032cbabeffda72b3550a22890] <==
	I1123 08:44:26.098599       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:26.098926       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:44:26.099516       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:26.099569       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:26.099778       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:26.301661       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:26.301777       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:26.301790       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:26.301921       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:26.894314       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:26.894348       1 metrics.go:72] Registering metrics
	I1123 08:44:26.894738       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [c3d8f23207db66273ca5d47b79b3eade5ea2711786fbd196796d8370dfccd31e] <==
	E1123 08:44:17.125546       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 08:44:17.173385       1 controller.go:667] quota admission added evaluator for: namespaces
	E1123 08:44:17.174170       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:44:17.177217       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:17.177431       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:44:17.184078       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:17.184849       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:44:17.377472       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:17.976930       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:44:17.980600       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:44:17.980615       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:18.404435       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:18.439913       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:18.478797       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:44:18.483779       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 08:44:18.484571       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:44:18.487854       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:44:19.006194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:44:19.646654       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:44:19.656086       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:44:19.662123       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:44:24.358994       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:44:24.815427       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:44:25.111034       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:25.115223       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [2598511ece3d2b06167cbddb27e4f7dc5ba14d896ed01c7b1d68eb834c6f9a3c] <==
	I1123 08:44:24.006939       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:44:24.006953       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:44:24.007008       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:44:24.007133       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:44:24.007366       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:44:24.007555       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:44:24.007606       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:44:24.007720       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:44:24.007732       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:44:24.007802       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:44:24.007874       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-653361"
	I1123 08:44:24.007912       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:44:24.008160       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:44:24.009614       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:44:24.009815       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:44:24.013649       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:24.013664       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:44:24.014896       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:44:24.019078       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:44:24.023441       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:24.025615       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:44:24.029883       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:44:24.035587       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:44:24.040804       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:44:24.054089       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f8e91622e0835c7cbd5c2cd6d75913867afa5e14ace98e6b40321e465a870ed9] <==
	I1123 08:44:25.883988       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:25.982028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:26.082905       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:26.082946       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:44:26.083060       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:26.108115       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:26.108181       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:26.114853       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:26.115324       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:26.115352       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:26.117746       1 config.go:200] "Starting service config controller"
	I1123 08:44:26.117803       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:26.118859       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:26.118897       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:26.131285       1 config.go:309] "Starting node config controller"
	I1123 08:44:26.133787       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:26.136465       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:26.136490       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:26.218752       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:44:26.234896       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:44:26.234935       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:26.237151       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
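	
	The kube-proxy warning at 08:44:26.083060 above includes its own remedy. Expressed as a flag it would look like the sketch below; note this is illustrative only, since kubeadm-managed clusters typically set this through the kube-proxy ConfigMap rather than command-line flags:
	
	  # The warning's own suggestion, shown as a flag (sketch; usually configured
	  # via the kube-proxy ConfigMap in kubeadm-based clusters).
	  kube-proxy --nodeport-addresses primary
	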
	
	
	==> kube-scheduler [d4ab4e66de33e62e99a82cddd0818a513d2c88626dee64d83b2236b3e1080f7f] <==
	E1123 08:44:17.045187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:44:17.045214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:44:17.045272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:44:17.045308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:44:17.045486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:44:17.045498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:44:17.045570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:44:17.045697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:44:17.045716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:44:17.045786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:44:17.045798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:44:17.045827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:44:17.045920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:44:17.045925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:44:17.045974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:44:17.854745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:44:17.937751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:44:17.955827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:44:17.986817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:44:18.007859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:44:18.009715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:44:18.040782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:44:18.123053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:44:18.211414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1123 08:44:18.539278       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:44:20 newest-cni-653361 kubelet[1304]: I1123 08:44:20.493278    1304 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-653361"
	Nov 23 08:44:20 newest-cni-653361 kubelet[1304]: E1123 08:44:20.499367    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-653361\" already exists" pod="kube-system/kube-apiserver-newest-cni-653361"
	Nov 23 08:44:20 newest-cni-653361 kubelet[1304]: E1123 08:44:20.501653    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-653361\" already exists" pod="kube-system/kube-controller-manager-newest-cni-653361"
	Nov 23 08:44:20 newest-cni-653361 kubelet[1304]: E1123 08:44:20.501735    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-653361\" already exists" pod="kube-system/etcd-newest-cni-653361"
	Nov 23 08:44:20 newest-cni-653361 kubelet[1304]: I1123 08:44:20.509466    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-653361" podStartSLOduration=1.509445327 podStartE2EDuration="1.509445327s" podCreationTimestamp="2025-11-23 08:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:20.500009929 +0000 UTC m=+1.100736609" watchObservedRunningTime="2025-11-23 08:44:20.509445327 +0000 UTC m=+1.110171998"
	Nov 23 08:44:20 newest-cni-653361 kubelet[1304]: I1123 08:44:20.509623    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-653361" podStartSLOduration=1.509611314 podStartE2EDuration="1.509611314s" podCreationTimestamp="2025-11-23 08:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:20.509570693 +0000 UTC m=+1.110297366" watchObservedRunningTime="2025-11-23 08:44:20.509611314 +0000 UTC m=+1.110337987"
	Nov 23 08:44:20 newest-cni-653361 kubelet[1304]: I1123 08:44:20.516733    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-653361" podStartSLOduration=1.516717884 podStartE2EDuration="1.516717884s" podCreationTimestamp="2025-11-23 08:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:20.516711563 +0000 UTC m=+1.117438235" watchObservedRunningTime="2025-11-23 08:44:20.516717884 +0000 UTC m=+1.117444556"
	Nov 23 08:44:20 newest-cni-653361 kubelet[1304]: I1123 08:44:20.539000    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-653361" podStartSLOduration=1.538983929 podStartE2EDuration="1.538983929s" podCreationTimestamp="2025-11-23 08:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:20.525928507 +0000 UTC m=+1.126655181" watchObservedRunningTime="2025-11-23 08:44:20.538983929 +0000 UTC m=+1.139710601"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.008088    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.009054    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.905085    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf003336-6803-41a9-aaea-9aba51c062be-xtables-lock\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.905119    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bzcc\" (UniqueName: \"kubernetes.io/projected/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-kube-api-access-9bzcc\") pod \"kube-proxy-hwjc5\" (UID: \"4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f\") " pod="kube-system/kube-proxy-hwjc5"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.905143    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf003336-6803-41a9-aaea-9aba51c062be-lib-modules\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.905163    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9c9t\" (UniqueName: \"kubernetes.io/projected/bf003336-6803-41a9-aaea-9aba51c062be-kube-api-access-v9c9t\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.905176    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-xtables-lock\") pod \"kube-proxy-hwjc5\" (UID: \"4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f\") " pod="kube-system/kube-proxy-hwjc5"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.905199    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-lib-modules\") pod \"kube-proxy-hwjc5\" (UID: \"4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f\") " pod="kube-system/kube-proxy-hwjc5"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.905285    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bf003336-6803-41a9-aaea-9aba51c062be-cni-cfg\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:24 newest-cni-653361 kubelet[1304]: I1123 08:44:24.905299    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-kube-proxy\") pod \"kube-proxy-hwjc5\" (UID: \"4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f\") " pod="kube-system/kube-proxy-hwjc5"
	Nov 23 08:44:25 newest-cni-653361 kubelet[1304]: E1123 08:44:25.014868    1304 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:25 newest-cni-653361 kubelet[1304]: E1123 08:44:25.014908    1304 projected.go:196] Error preparing data for projected volume kube-api-access-9bzcc for pod kube-system/kube-proxy-hwjc5: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:25 newest-cni-653361 kubelet[1304]: E1123 08:44:25.014980    1304 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:25 newest-cni-653361 kubelet[1304]: E1123 08:44:25.015002    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-kube-api-access-9bzcc podName:4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f nodeName:}" failed. No retries permitted until 2025-11-23 08:44:25.514968241 +0000 UTC m=+6.115694920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9bzcc" (UniqueName: "kubernetes.io/projected/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-kube-api-access-9bzcc") pod "kube-proxy-hwjc5" (UID: "4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f") : configmap "kube-root-ca.crt" not found
	Nov 23 08:44:25 newest-cni-653361 kubelet[1304]: E1123 08:44:25.015007    1304 projected.go:196] Error preparing data for projected volume kube-api-access-v9c9t for pod kube-system/kindnet-sv4xk: configmap "kube-root-ca.crt" not found
	Nov 23 08:44:25 newest-cni-653361 kubelet[1304]: E1123 08:44:25.015066    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf003336-6803-41a9-aaea-9aba51c062be-kube-api-access-v9c9t podName:bf003336-6803-41a9-aaea-9aba51c062be nodeName:}" failed. No retries permitted until 2025-11-23 08:44:25.515045556 +0000 UTC m=+6.115772232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v9c9t" (UniqueName: "kubernetes.io/projected/bf003336-6803-41a9-aaea-9aba51c062be-kube-api-access-v9c9t") pod "kindnet-sv4xk" (UID: "bf003336-6803-41a9-aaea-9aba51c062be") : configmap "kube-root-ca.crt" not found
	Nov 23 08:44:26 newest-cni-653361 kubelet[1304]: I1123 08:44:26.559677    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hwjc5" podStartSLOduration=2.559649543 podStartE2EDuration="2.559649543s" podCreationTimestamp="2025-11-23 08:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:26.53774819 +0000 UTC m=+7.138474870" watchObservedRunningTime="2025-11-23 08:44:26.559649543 +0000 UTC m=+7.160376226"
	

                                                
                                                
-- /stdout --
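The kube-scheduler "Failed to watch ... is forbidden" errors near the end of the log above are transient: they appear in the first seconds after an apiserver (re)start, while the default RBAC policies are still being reconciled, and they stop by the time the "Caches are synced" line is logged. A minimal sketch to confirm the scheduler's permissions after startup (assuming the newest-cni-653361 kubectl context used elsewhere in this run):

	kubectl --context newest-cni-653361 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler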
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653361 -n newest-cni-653361
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-653361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-csqvp storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-653361 describe pod coredns-66bc5c9577-csqvp storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-653361 describe pod coredns-66bc5c9577-csqvp storage-provisioner: exit status 1 (56.091683ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-csqvp" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-653361 describe pod coredns-66bc5c9577-csqvp storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.31s)
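The kubelet "configmap \"kube-root-ca.crt\" not found" errors in the log above are likewise bring-up noise: kube-controller-manager publishes kube-root-ca.crt into every namespace shortly after it starts, and the projected-volume mounts retry until it exists (kube-proxy-hwjc5 does reach Running at 08:44:26). A minimal check, assuming the same context:

	kubectl --context newest-cni-653361 -n kube-system get configmap kube-root-ca.crt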

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (239.262164ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
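What fails here is minikube's paused-state check, not the addon apply itself: per the error text above, it shells into the node and runs sudo runc list -f json, which exits non-zero because /run/runc does not exist (one plausible cause is that the node's container runtime never created that state directory, e.g. if cri-o is configured with a different low-level runtime; that is an assumption, not something this log confirms). A minimal reproduction sketch against the node container named in this run:

	docker exec default-k8s-diff-port-726261 ls -ld /run/runc
	docker exec default-k8s-diff-port-726261 sudo runc list -f json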
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-726261 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-726261 describe deploy/metrics-server -n kube-system: exit status 1 (55.793722ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-726261 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
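For context on the expected string: --registries=MetricsServer=fake.domain prepends the registry override to the --images override, so the test looks for fake.domain/registry.k8s.io/echoserver:1.4 in the deployment description. Because the enable step already failed, the deployment was never created and the info string is empty. A minimal sketch of the equivalent manual check, assuming the deployment existed:

	kubectl --context default-k8s-diff-port-726261 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'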
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-726261
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-726261:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387",
	        "Created": "2025-11-23T08:43:38.364416328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302592,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:43:38.395238973Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/hostname",
	        "HostsPath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/hosts",
	        "LogPath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387-json.log",
	        "Name": "/default-k8s-diff-port-726261",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-726261:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-726261",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387",
	                "LowerDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-726261",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-726261/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-726261",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-726261",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-726261",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ce7c0829ee9b24b44c09f756bb50af105f52977505f6979a8f0cf4a9f751d183",
	            "SandboxKey": "/var/run/docker/netns/ce7c0829ee9b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-726261": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8e58961f30240336633bec998e074fa68c1170ebe5fe0d36562f8ff59e516d42",
	                    "EndpointID": "a614135b9def5e8001a61b9bbd269010a425059d55a68bc2509c9c1c9761fd96",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "6a:d3:ae:49:16:90",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-726261",
	                        "55c5a560eb12"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
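Two details worth pulling out of the inspect dump above: the node container is Running (so the runc failure is not a stopped machine), and this profile serves the API on container port 8444 rather than the default 8443, published to 127.0.0.1:33104. A minimal sketch to query just the published ports:

	docker port default-k8s-diff-port-726261
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-726261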
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-726261 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-351793 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo docker system info                                                                                                                                                                                                      │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo containerd config dump                                                                                                                                                                                                  │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo crio config                                                                                                                                                                                                             │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p bridge-351793                                                                                                                                                                                                                              │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ -p old-k8s-version-057894 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-057894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p newest-cni-653361 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:44:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:44:16.418060  314636 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:44:16.418184  314636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:16.418195  314636 out.go:374] Setting ErrFile to fd 2...
	I1123 08:44:16.418200  314636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:16.418484  314636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:44:16.419017  314636 out.go:368] Setting JSON to false
	I1123 08:44:16.420248  314636 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5203,"bootTime":1763882253,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:44:16.420302  314636 start.go:143] virtualization: kvm guest
	I1123 08:44:16.422513  314636 out.go:179] * [old-k8s-version-057894] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:44:16.426605  314636 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:44:16.426606  314636 notify.go:221] Checking for updates...
	I1123 08:44:16.428841  314636 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:44:16.429902  314636 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:16.430819  314636 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:44:16.431702  314636 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:44:16.432602  314636 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:44:16.434097  314636 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:44:16.435753  314636 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:44:16.436562  314636 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:44:16.462564  314636 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:44:16.462643  314636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:16.532612  314636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:44:16.521915311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:16.532791  314636 docker.go:319] overlay module found
	I1123 08:44:16.535057  314636 out.go:179] * Using the docker driver based on existing profile
	I1123 08:44:16.536052  314636 start.go:309] selected driver: docker
	I1123 08:44:16.536065  314636 start.go:927] validating driver "docker" against &{Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:16.536188  314636 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:44:16.536795  314636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:16.600833  314636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:44:16.59146408 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:16.601200  314636 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:16.601242  314636 cni.go:84] Creating CNI manager for ""
	I1123 08:44:16.601318  314636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:16.601385  314636 start.go:353] cluster config:
	{Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:16.603067  314636 out.go:179] * Starting "old-k8s-version-057894" primary control-plane node in "old-k8s-version-057894" cluster
	I1123 08:44:16.603971  314636 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:44:16.605060  314636 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:44:16.606152  314636 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:44:16.606180  314636 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1123 08:44:16.606205  314636 cache.go:65] Caching tarball of preloaded images
	I1123 08:44:16.606246  314636 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:44:16.606294  314636 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:44:16.606309  314636 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 08:44:16.606401  314636 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/config.json ...
	I1123 08:44:16.629025  314636 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:44:16.629041  314636 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:44:16.629055  314636 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:44:16.629079  314636 start.go:360] acquireMachinesLock for old-k8s-version-057894: {Name:mk24ea9464b285d5ccac107c6969c1ae844d534b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:16.629128  314636 start.go:364] duration metric: took 33.636µs to acquireMachinesLock for "old-k8s-version-057894"
	I1123 08:44:16.629143  314636 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:44:16.629151  314636 fix.go:54] fixHost starting: 
	I1123 08:44:16.629339  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:16.650710  314636 fix.go:112] recreateIfNeeded on old-k8s-version-057894: state=Stopped err=<nil>
	W1123 08:44:16.650739  314636 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:44:13.642139  301517 node_ready.go:57] node "default-k8s-diff-port-726261" has "Ready":"False" status (will retry)
	W1123 08:44:16.142649  301517 node_ready.go:57] node "default-k8s-diff-port-726261" has "Ready":"False" status (will retry)
	W1123 08:44:13.972407  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	W1123 08:44:15.972617  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	W1123 08:44:18.472348  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	I1123 08:44:20.247025  310933 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:44:20.247135  310933 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:44:20.247262  310933 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:44:20.247346  310933 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:44:20.247409  310933 kubeadm.go:319] OS: Linux
	I1123 08:44:20.247472  310933 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:44:20.247514  310933 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:44:20.247591  310933 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:44:20.247675  310933 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:44:20.247768  310933 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:44:20.247846  310933 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:44:20.247920  310933 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:44:20.247982  310933 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:44:20.248089  310933 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:44:20.248229  310933 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:44:20.248363  310933 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:44:20.248480  310933 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:44:20.249638  310933 out.go:252]   - Generating certificates and keys ...
	I1123 08:44:20.249750  310933 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:44:20.249829  310933 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:44:20.249910  310933 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:44:20.249991  310933 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:44:20.250044  310933 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:44:20.250090  310933 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:44:20.250160  310933 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:44:20.250299  310933 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-653361] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:44:20.250384  310933 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:44:20.250497  310933 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-653361] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:44:20.250558  310933 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:44:20.250625  310933 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:44:20.250670  310933 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:44:20.250763  310933 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:44:20.250844  310933 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:44:20.250930  310933 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:44:20.251013  310933 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:44:20.251103  310933 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:44:20.251193  310933 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:44:20.251292  310933 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:44:20.251392  310933 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:44:20.253553  310933 out.go:252]   - Booting up control plane ...
	I1123 08:44:20.253634  310933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:44:20.253732  310933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:44:20.253862  310933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:44:20.253996  310933 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:44:20.254161  310933 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:44:20.254325  310933 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:44:20.254452  310933 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:44:20.254510  310933 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:44:20.254656  310933 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:44:20.254855  310933 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:44:20.254947  310933 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.635187ms
	I1123 08:44:20.255081  310933 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:44:20.255191  310933 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:44:20.255310  310933 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:44:20.255410  310933 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:44:20.255509  310933 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.543941134s
	I1123 08:44:20.255592  310933 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.711134146s
	I1123 08:44:20.255672  310933 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.50190822s
	I1123 08:44:20.255836  310933 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:44:20.255991  310933 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:44:20.256065  310933 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:44:20.256328  310933 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-653361 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:44:20.256394  310933 kubeadm.go:319] [bootstrap-token] Using token: 0wyvo8.gmxzh0st4hzmadft
	I1123 08:44:20.258116  310933 out.go:252]   - Configuring RBAC rules ...
	I1123 08:44:20.258221  310933 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:44:20.258316  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:44:20.258491  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:44:20.258665  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:44:20.258863  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:44:20.258955  310933 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:44:20.259072  310933 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:44:20.259116  310933 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:44:20.259177  310933 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:44:20.259186  310933 kubeadm.go:319] 
	I1123 08:44:20.259252  310933 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:44:20.259265  310933 kubeadm.go:319] 
	I1123 08:44:20.259329  310933 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:44:20.259335  310933 kubeadm.go:319] 
	I1123 08:44:20.259360  310933 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:44:20.259415  310933 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:44:20.259464  310933 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:44:20.259470  310933 kubeadm.go:319] 
	I1123 08:44:20.259529  310933 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:44:20.259536  310933 kubeadm.go:319] 
	I1123 08:44:20.259575  310933 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:44:20.259581  310933 kubeadm.go:319] 
	I1123 08:44:20.259624  310933 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:44:20.259706  310933 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:44:20.259768  310933 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:44:20.259774  310933 kubeadm.go:319] 
	I1123 08:44:20.259848  310933 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:44:20.259953  310933 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:44:20.259960  310933 kubeadm.go:319] 
	I1123 08:44:20.260033  310933 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0wyvo8.gmxzh0st4hzmadft \
	I1123 08:44:20.260124  310933 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 08:44:20.260143  310933 kubeadm.go:319] 	--control-plane 
	I1123 08:44:20.260152  310933 kubeadm.go:319] 
	I1123 08:44:20.260224  310933 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:44:20.260230  310933 kubeadm.go:319] 
	I1123 08:44:20.260302  310933 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0wyvo8.gmxzh0st4hzmadft \
	I1123 08:44:20.260407  310933 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
	I1123 08:44:20.260418  310933 cni.go:84] Creating CNI manager for ""
	I1123 08:44:20.260424  310933 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:20.261586  310933 out.go:179] * Configuring CNI (Container Networking Interface) ...
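
Once kubeadm prints the success banner above, the cluster can be inspected directly with the freshly written admin kubeconfig. A minimal sketch, mirroring kubeadm's own suggestions (not commands captured in this log):

    # inside the node, as root
    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl get nodes                 # newest-cni-653361; NotReady until a CNI (here kindnet) is applied
    kubectl -n kube-system get pods   # static control-plane pods plus CoreDNS and kube-proxy
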
	I1123 08:44:16.654090  314636 out.go:252] * Restarting existing docker container for "old-k8s-version-057894" ...
	I1123 08:44:16.654186  314636 cli_runner.go:164] Run: docker start old-k8s-version-057894
	I1123 08:44:16.984977  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:17.023793  314636 kic.go:430] container "old-k8s-version-057894" state is running.
	I1123 08:44:17.024222  314636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-057894
	I1123 08:44:17.045881  314636 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/config.json ...
	I1123 08:44:17.046142  314636 machine.go:94] provisionDockerMachine start ...
	I1123 08:44:17.046245  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:17.063894  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:17.064129  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:17.064143  314636 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:44:17.064767  314636 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50416->127.0.0.1:33111: read: connection reset by peer
	I1123 08:44:20.207281  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-057894
	
	I1123 08:44:20.207320  314636 ubuntu.go:182] provisioning hostname "old-k8s-version-057894"
	I1123 08:44:20.207405  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.225411  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:20.225640  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:20.225654  314636 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-057894 && echo "old-k8s-version-057894" | sudo tee /etc/hostname
	I1123 08:44:20.384120  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-057894
	
	I1123 08:44:20.384196  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.401285  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:20.401561  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:20.401587  314636 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-057894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-057894/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-057894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:44:20.553936  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: 
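
The guarded script above only rewrites /etc/hosts when the 127.0.1.1 mapping is missing or stale, so reruns are idempotent. Verifying the result by hand would look like (sketch):

    hostname                    # old-k8s-version-057894
    grep 127.0.1.1 /etc/hosts   # 127.0.1.1 old-k8s-version-057894
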
	I1123 08:44:20.553968  314636 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:44:20.554005  314636 ubuntu.go:190] setting up certificates
	I1123 08:44:20.554025  314636 provision.go:84] configureAuth start
	I1123 08:44:20.554402  314636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-057894
	I1123 08:44:20.592136  314636 provision.go:143] copyHostCerts
	I1123 08:44:20.592213  314636 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:44:20.592232  314636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:44:20.592312  314636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:44:20.592436  314636 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:44:20.592447  314636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:44:20.592484  314636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:44:20.592573  314636 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:44:20.592582  314636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:44:20.592614  314636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:44:20.592714  314636 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-057894 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-057894]
	I1123 08:44:20.652221  314636 provision.go:177] copyRemoteCerts
	I1123 08:44:20.652281  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:44:20.652322  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.672322  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:20.773680  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:44:20.790760  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:44:20.807788  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:44:20.824033  314636 provision.go:87] duration metric: took 269.99842ms to configureAuth
	I1123 08:44:20.824051  314636 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:44:20.824240  314636 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:44:20.824327  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.842425  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:20.842737  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:20.842764  314636 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:44:21.173321  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:44:21.173348  314636 machine.go:97] duration metric: took 4.127187999s to provisionDockerMachine
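
The env file written a few lines above is what feeds the --insecure-registry flag into the CRI-O unit, and the trailing systemctl restart makes it take effect. A quick post-hoc check, assuming the paths from that command (sketch):

    cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # active
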
	I1123 08:44:21.173360  314636 start.go:293] postStartSetup for "old-k8s-version-057894" (driver="docker")
	I1123 08:44:21.173371  314636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:44:21.173426  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:44:21.173498  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.192289  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:21.293367  314636 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:44:21.296864  314636 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:44:21.296893  314636 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:44:21.296904  314636 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:44:21.296969  314636 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:44:21.297081  314636 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:44:21.297209  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:44:21.304802  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:44:21.321290  314636 start.go:296] duration metric: took 147.91911ms for postStartSetup
	I1123 08:44:21.321383  314636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:44:21.321433  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.339672  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:18.642288  301517 node_ready.go:49] node "default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:18.642321  301517 node_ready.go:38] duration metric: took 11.503564271s for node "default-k8s-diff-port-726261" to be "Ready" ...
	I1123 08:44:18.642339  301517 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:18.642388  301517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:18.658421  301517 api_server.go:72] duration metric: took 11.812908089s to wait for apiserver process to appear ...
	I1123 08:44:18.658458  301517 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:18.658477  301517 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:44:18.663288  301517 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:44:18.664345  301517 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:18.664369  301517 api_server.go:131] duration metric: took 5.904232ms to wait for apiserver health ...
	I1123 08:44:18.664377  301517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:18.668437  301517 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:18.668483  301517 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:18.668492  301517 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:18.668501  301517 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:18.668511  301517 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:18.668516  301517 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:18.668521  301517 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:18.668529  301517 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:18.668535  301517 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:18.668543  301517 system_pods.go:74] duration metric: took 4.160794ms to wait for pod list to return data ...
	I1123 08:44:18.668557  301517 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:18.670768  301517 default_sa.go:45] found service account: "default"
	I1123 08:44:18.670786  301517 default_sa.go:55] duration metric: took 2.223017ms for default service account to be created ...
	I1123 08:44:18.670796  301517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:44:18.673368  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:18.673401  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:18.673412  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:18.673425  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:18.673434  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:18.673449  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:18.673462  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:18.673471  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:18.673479  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:18.673510  301517 retry.go:31] will retry after 273.138898ms: missing components: kube-dns
	I1123 08:44:18.950428  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:18.950462  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:18.950468  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:18.950474  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:18.950477  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:18.950486  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:18.950492  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:18.950497  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:18.950505  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:18.950527  301517 retry.go:31] will retry after 324.368056ms: missing components: kube-dns
	I1123 08:44:19.282612  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:19.282655  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:19.282664  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:19.282681  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:19.282711  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:19.282717  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:19.282722  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:19.282728  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:19.282735  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:19.282752  301517 retry.go:31] will retry after 341.175275ms: missing components: kube-dns
	I1123 08:44:19.628067  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:19.628106  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:19.628115  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:19.628124  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:19.628131  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:19.628136  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:19.628141  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:19.628147  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:19.628151  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:19.628166  301517 retry.go:31] will retry after 385.479643ms: missing components: kube-dns
	I1123 08:44:20.019211  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:20.019262  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running
	I1123 08:44:20.019271  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:20.019278  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:20.019290  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:20.019297  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:20.019302  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:20.019307  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:20.019313  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running
	I1123 08:44:20.019328  301517 system_pods.go:126] duration metric: took 1.348525547s to wait for k8s-apps to be running ...
	I1123 08:44:20.019337  301517 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:44:20.019398  301517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:20.032534  301517 system_svc.go:56] duration metric: took 13.191771ms WaitForService to wait for kubelet
	I1123 08:44:20.032556  301517 kubeadm.go:587] duration metric: took 13.187050567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:20.032570  301517 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:20.035222  301517 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:20.035255  301517 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:20.035272  301517 node_conditions.go:105] duration metric: took 2.697218ms to run NodePressure ...
	I1123 08:44:20.035284  301517 start.go:242] waiting for startup goroutines ...
	I1123 08:44:20.035296  301517 start.go:247] waiting for cluster config update ...
	I1123 08:44:20.035308  301517 start.go:256] writing updated cluster config ...
	I1123 08:44:20.035582  301517 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:20.039148  301517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:20.042349  301517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.046265  301517 pod_ready.go:94] pod "coredns-66bc5c9577-8f8f5" is "Ready"
	I1123 08:44:20.046284  301517 pod_ready.go:86] duration metric: took 3.909737ms for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.048015  301517 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.051563  301517 pod_ready.go:94] pod "etcd-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:20.051582  301517 pod_ready.go:86] duration metric: took 3.548608ms for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.053391  301517 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.058527  301517 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:20.058551  301517 pod_ready.go:86] duration metric: took 5.13961ms for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.060160  301517 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.443432  301517 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:20.443460  301517 pod_ready.go:86] duration metric: took 383.282782ms for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.644026  301517 pod_ready.go:83] waiting for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.043432  301517 pod_ready.go:94] pod "kube-proxy-sn4sp" is "Ready"
	I1123 08:44:21.043456  301517 pod_ready.go:86] duration metric: took 399.407792ms for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.244389  301517 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.644143  301517 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:21.644175  301517 pod_ready.go:86] duration metric: took 399.759889ms for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.644190  301517 pod_ready.go:40] duration metric: took 1.605017538s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:21.697309  301517 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:21.699630  301517 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-726261" cluster and "default" namespace by default
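
At this point the kubeconfig context name matches the profile, so the cluster is reachable without extra flags. For example (sketch, not part of the captured log):

    kubectl config current-context    # default-k8s-diff-port-726261
    kubectl -n kube-system get pods   # the eight pods enumerated in the readiness loop above
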
	I1123 08:44:21.437237  314636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:44:21.441902  314636 fix.go:56] duration metric: took 4.812745863s for fixHost
	I1123 08:44:21.441927  314636 start.go:83] releasing machines lock for "old-k8s-version-057894", held for 4.812789083s
	I1123 08:44:21.441996  314636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-057894
	I1123 08:44:21.461031  314636 ssh_runner.go:195] Run: cat /version.json
	I1123 08:44:21.461084  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.461105  314636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:44:21.461168  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.480163  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:21.480473  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:21.634506  314636 ssh_runner.go:195] Run: systemctl --version
	I1123 08:44:21.641286  314636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:44:21.685409  314636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:44:21.690169  314636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:44:21.690228  314636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:44:21.698154  314636 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:44:21.698171  314636 start.go:496] detecting cgroup driver to use...
	I1123 08:44:21.698198  314636 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:44:21.698236  314636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:44:21.711950  314636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:44:21.726746  314636 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:44:21.726796  314636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:44:21.741579  314636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:44:21.754743  314636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:44:21.841306  314636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:44:21.930875  314636 docker.go:234] disabling docker service ...
	I1123 08:44:21.930940  314636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:44:21.944498  314636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:44:21.957091  314636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:44:22.052960  314636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:44:22.135533  314636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:44:22.147635  314636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:44:22.163753  314636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 08:44:22.163824  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.173900  314636 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:44:22.173957  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.184459  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.193984  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.202599  314636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:44:22.212728  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.221809  314636 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.229818  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.238209  314636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:44:22.245345  314636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:44:22.252238  314636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:22.338869  314636 ssh_runner.go:195] Run: sudo systemctl restart crio
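
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following effective settings (reconstructed from the commands, not captured from this run; the section headers belong to the stock drop-in):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
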
	I1123 08:44:22.479721  314636 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:44:22.479814  314636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:44:22.483897  314636 start.go:564] Will wait 60s for crictl version
	I1123 08:44:22.483945  314636 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.487547  314636 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:44:22.519750  314636 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
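
With /etc/crictl.yaml pointing at the CRI-O socket (written earlier), the same binary can answer the questions this code path asks next. A sketch:

    sudo crictl info     # runtime conditions and CNI status as JSON
    sudo crictl images   # should list the preloaded images verified just below
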
	I1123 08:44:22.519832  314636 ssh_runner.go:195] Run: crio --version
	I1123 08:44:22.551262  314636 ssh_runner.go:195] Run: crio --version
	I1123 08:44:22.580715  314636 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 08:44:22.581831  314636 cli_runner.go:164] Run: docker network inspect old-k8s-version-057894 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:44:22.599083  314636 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:44:22.603144  314636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:22.613848  314636 kubeadm.go:884] updating cluster {Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:44:22.613944  314636 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:44:22.613998  314636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:22.647518  314636 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:44:22.647542  314636 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:44:22.647616  314636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:22.675816  314636 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:44:22.675840  314636 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:44:22.675848  314636 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1123 08:44:22.675954  314636 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-057894 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
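
The unit fragment above is installed as the systemd drop-in scp'd below (10-kubeadm.conf), overriding ExecStart with the per-node flags. Confirming what systemd actually resolved (sketch):

    systemctl cat kubelet | grep -A1 '^ExecStart='   # the empty reset line, then minikube's ExecStart
    systemctl is-enabled kubelet
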
	I1123 08:44:22.676050  314636 ssh_runner.go:195] Run: crio config
	I1123 08:44:22.733251  314636 cni.go:84] Creating CNI manager for ""
	I1123 08:44:22.733275  314636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:22.733293  314636 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:44:22.733329  314636 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-057894 NodeName:old-k8s-version-057894 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:44:22.733544  314636 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-057894"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:44:22.733619  314636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:44:22.744170  314636 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:44:22.744228  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:44:22.752661  314636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 08:44:22.768641  314636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:44:22.782019  314636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
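
Before kubeadm consumes /var/tmp/minikube/kubeadm.yaml.new, the file can be sanity-checked with the staged binary. A sketch, assuming `kubeadm config validate` is available in the v1.28.0 binary (it is not run in this log):

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
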
	I1123 08:44:22.796003  314636 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:44:22.800977  314636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
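
The grep/echo idiom above is the same one used earlier for host.minikube.internal: strip any stale entry, append the fresh mapping, and copy the temp file back over /etc/hosts in a single sudo cp. Verifying the result (sketch):

    getent hosts control-plane.minikube.internal   # 192.168.76.2 control-plane.minikube.internal
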
	I1123 08:44:22.813321  314636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:22.903093  314636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:22.925936  314636 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894 for IP: 192.168.76.2
	I1123 08:44:22.925957  314636 certs.go:195] generating shared ca certs ...
	I1123 08:44:22.925976  314636 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:22.926151  314636 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:44:22.926214  314636 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:44:22.926226  314636 certs.go:257] generating profile certs ...
	I1123 08:44:22.926325  314636 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/client.key
	I1123 08:44:22.926393  314636 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/apiserver.key.249ce811
	I1123 08:44:22.926443  314636 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/proxy-client.key
	I1123 08:44:22.926574  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:44:22.926615  314636 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:44:22.926627  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:44:22.926663  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:44:22.926714  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:44:22.926747  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:44:22.926807  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:44:22.927577  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:44:22.946066  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:44:22.965035  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:44:22.983167  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:44:23.004136  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:44:23.025198  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:44:23.041558  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:44:23.058566  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:44:23.074677  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:44:23.091292  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:44:23.107997  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:44:23.125834  314636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:44:23.138442  314636 ssh_runner.go:195] Run: openssl version
	I1123 08:44:23.144241  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:44:23.152727  314636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:23.156543  314636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:23.156592  314636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:23.194469  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:44:23.202009  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:44:23.210602  314636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:44:23.214015  314636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:44:23.214065  314636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:44:23.247847  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
	I1123 08:44:23.255072  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:44:23.263009  314636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:44:23.266387  314636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:44:23.266430  314636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:44:23.300576  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
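
The three test/ls/hash sequences above implement the OpenSSL trust-store convention: each PEM is copied under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and /etc/ssl/certs/<hash>.0 is symlinked to it so OpenSSL-linked clients can resolve the CA. A minimal Go sketch of the same sequence (run locally rather than over ssh_runner; the helper name is hypothetical):

// installCACert mirrors the sequence logged above: take a PEM already
// placed under /usr/share/ca-certificates, compute its OpenSSL subject
// hash, and symlink it under /etc/ssl/certs/<hash>.0 so OpenSSL-based
// clients can locate it. Illustrative sketch only; minikube drives the
// same commands over SSH rather than locally.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link at <hash>.0 first.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
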
	I1123 08:44:23.308629  314636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:44:23.312141  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:44:23.346219  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:44:23.381481  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:44:23.417721  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:44:23.461311  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:44:23.504474  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
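
Each `-checkend 86400` invocation above asks whether the named certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. The same check expressed with Go's standard library, as a sketch with a hypothetical helper name:

// expiresWithin reports whether the leaf certificate in pemPath expires
// within d, the same question `openssl x509 -checkend 86400` answers
// above for each control-plane cert.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" lands past NotAfter, i.e. the cert is about to lapse.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
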
	I1123 08:44:23.560218  314636 kubeadm.go:401] StartCluster: {Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:23.560327  314636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:44:23.560395  314636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:44:23.599229  314636 cri.go:89] found id: "35f8086b1de4e31006310dbc9225c47fc7ce015e3238258161e81fc2d1c7f4bd"
	I1123 08:44:23.599258  314636 cri.go:89] found id: "62bca8b239fd282ce38b86b21b9897cfdd1cd66996c68c577fb4d9a16baca0f8"
	I1123 08:44:23.599264  314636 cri.go:89] found id: "46e574a85cdd50d2ed3dfea9bf9e72260185653dd7313da97ccc3c575be7c1e6"
	I1123 08:44:23.599270  314636 cri.go:89] found id: "5ed59b21f5fe5a105c3165b1f30786d03b6ba7fda1e27532fd0541a8a4b0df67"
	I1123 08:44:23.599284  314636 cri.go:89] found id: ""
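
cri.go builds its container list from `crictl ps -a --quiet` filtered by the pod-namespace label, one ID per output line (the final empty "found id" above is apparently the leftover trailing newline). A rough local equivalent, with a hypothetical helper name:

// listKubeSystemContainers mirrors the cri.go query above: ask crictl
// for the IDs of all containers whose pod lives in the kube-system
// namespace. Sketch only; the real code runs this over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		// Skip the empty element produced by the trailing newline.
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
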
	I1123 08:44:23.599331  314636 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 08:44:23.612757  314636 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:23Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:44:23.612946  314636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:44:23.621797  314636 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:44:23.621814  314636 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:44:23.621861  314636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:44:23.630422  314636 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:44:23.631238  314636 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-057894" does not appear in /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:23.631790  314636 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-10964/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-057894" cluster setting kubeconfig missing "old-k8s-version-057894" context setting]
	I1123 08:44:23.632584  314636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:23.634289  314636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:44:23.643154  314636 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 08:44:23.643181  314636 kubeadm.go:602] duration metric: took 21.360308ms to restartPrimaryControlPlane
	I1123 08:44:23.643190  314636 kubeadm.go:403] duration metric: took 82.98118ms to StartCluster
	I1123 08:44:23.643205  314636 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:23.643264  314636 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:23.644605  314636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:23.644839  314636 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:44:23.644977  314636 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:23.645117  314636 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-057894"
	I1123 08:44:23.645134  314636 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-057894"
	W1123 08:44:23.645142  314636 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:44:23.645143  314636 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:44:23.645155  314636 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-057894"
	I1123 08:44:23.645176  314636 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-057894"
	I1123 08:44:23.645188  314636 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:44:23.645252  314636 addons.go:70] Setting dashboard=true in profile "old-k8s-version-057894"
	I1123 08:44:23.645268  314636 addons.go:239] Setting addon dashboard=true in "old-k8s-version-057894"
	W1123 08:44:23.645275  314636 addons.go:248] addon dashboard should already be in state true
	I1123 08:44:23.645311  314636 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:44:23.645517  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.645713  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.645745  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.649572  314636 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:23.652170  314636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:23.673583  314636 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:23.674718  314636 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:23.674737  314636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:23.674752  314636 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:44:23.674789  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:23.675471  314636 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-057894"
	W1123 08:44:23.675491  314636 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:44:23.675516  314636 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:44:23.676047  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.679811  314636 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
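
The `docker container inspect -f` calls in this run use a Go template to dig the host port mapped to the container's 22/tcp out of .NetworkSettings.Ports; that port (33111 later in this log) is what the SSH client dials on 127.0.0.1. A standalone sketch of the same extraction, helper name hypothetical:

// sshHostPort extracts the host port Docker mapped to the container's
// 22/tcp, using the same Go template the cli_runner invocations above
// pass to `docker container inspect -f`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-057894")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh reachable on 127.0.0.1:" + port) // e.g. 33111 in the log above
}
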
	W1123 08:44:20.472384  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	I1123 08:44:22.472469  299523 node_ready.go:49] node "no-preload-187607" is "Ready"
	I1123 08:44:22.472501  299523 node_ready.go:38] duration metric: took 13.003189401s for node "no-preload-187607" to be "Ready" ...
	I1123 08:44:22.472517  299523 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:22.472570  299523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:22.485582  299523 api_server.go:72] duration metric: took 13.302203208s to wait for apiserver process to appear ...
	I1123 08:44:22.485608  299523 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:22.485625  299523 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:44:22.490169  299523 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:44:22.491237  299523 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:22.491264  299523 api_server.go:131] duration metric: took 5.649677ms to wait for apiserver health ...
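
The healthz probe above is a plain HTTPS GET against the apiserver that counts as healthy on a 200 response with body "ok". Roughly, in Go (sketch only; certificate verification is skipped here for brevity, whereas a real client would trust minikubeCA):

// checkHealthz performs the probe logged above: GET /healthz on the
// apiserver and report the status line and body.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only; pin the cluster CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.94.2:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
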
	I1123 08:44:22.491274  299523 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:22.496993  299523 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:22.497040  299523 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:22.497056  299523 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running
	I1123 08:44:22.497068  299523 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running
	I1123 08:44:22.497075  299523 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running
	I1123 08:44:22.497090  299523 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running
	I1123 08:44:22.497097  299523 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running
	I1123 08:44:22.497103  299523 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running
	I1123 08:44:22.497119  299523 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:22.497130  299523 system_pods.go:74] duration metric: took 5.849104ms to wait for pod list to return data ...
	I1123 08:44:22.497140  299523 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:22.499755  299523 default_sa.go:45] found service account: "default"
	I1123 08:44:22.499774  299523 default_sa.go:55] duration metric: took 2.624023ms for default service account to be created ...
	I1123 08:44:22.499783  299523 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:44:22.502854  299523 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:22.502878  299523 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:22.502883  299523 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running
	I1123 08:44:22.502889  299523 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running
	I1123 08:44:22.502903  299523 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running
	I1123 08:44:22.502911  299523 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running
	I1123 08:44:22.502914  299523 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running
	I1123 08:44:22.502918  299523 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running
	I1123 08:44:22.502922  299523 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:22.502947  299523 retry.go:31] will retry after 212.635743ms: missing components: kube-dns
	I1123 08:44:22.720827  299523 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:22.720860  299523 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running
	I1123 08:44:22.720868  299523 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running
	I1123 08:44:22.720874  299523 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running
	I1123 08:44:22.720879  299523 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running
	I1123 08:44:22.720884  299523 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running
	I1123 08:44:22.720889  299523 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running
	I1123 08:44:22.720894  299523 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running
	I1123 08:44:22.720898  299523 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running
	I1123 08:44:22.720908  299523 system_pods.go:126] duration metric: took 221.118098ms to wait for k8s-apps to be running ...
	I1123 08:44:22.720921  299523 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:44:22.720967  299523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:22.737856  299523 system_svc.go:56] duration metric: took 16.926837ms WaitForService to wait for kubelet
	I1123 08:44:22.737885  299523 kubeadm.go:587] duration metric: took 13.554508173s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:22.737907  299523 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:22.741435  299523 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:22.741466  299523 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:22.741501  299523 node_conditions.go:105] duration metric: took 3.587505ms to run NodePressure ...
	I1123 08:44:22.741521  299523 start.go:242] waiting for startup goroutines ...
	I1123 08:44:22.741530  299523 start.go:247] waiting for cluster config update ...
	I1123 08:44:22.741543  299523 start.go:256] writing updated cluster config ...
	I1123 08:44:22.741835  299523 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:22.746467  299523 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:22.750370  299523 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.755106  299523 pod_ready.go:94] pod "coredns-66bc5c9577-khlrk" is "Ready"
	I1123 08:44:22.755127  299523 pod_ready.go:86] duration metric: took 4.736609ms for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.757334  299523 pod_ready.go:83] waiting for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.761272  299523 pod_ready.go:94] pod "etcd-no-preload-187607" is "Ready"
	I1123 08:44:22.761300  299523 pod_ready.go:86] duration metric: took 3.934649ms for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.763291  299523 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.767155  299523 pod_ready.go:94] pod "kube-apiserver-no-preload-187607" is "Ready"
	I1123 08:44:22.767175  299523 pod_ready.go:86] duration metric: took 3.862325ms for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.769311  299523 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.150645  299523 pod_ready.go:94] pod "kube-controller-manager-no-preload-187607" is "Ready"
	I1123 08:44:23.150674  299523 pod_ready.go:86] duration metric: took 381.341589ms for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.350884  299523 pod_ready.go:83] waiting for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.751044  299523 pod_ready.go:94] pod "kube-proxy-f9d8j" is "Ready"
	I1123 08:44:23.751078  299523 pod_ready.go:86] duration metric: took 400.167313ms for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.952910  299523 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:24.350764  299523 pod_ready.go:94] pod "kube-scheduler-no-preload-187607" is "Ready"
	I1123 08:44:24.350789  299523 pod_ready.go:86] duration metric: took 397.819843ms for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:24.350803  299523 pod_ready.go:40] duration metric: took 1.604299274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:24.397775  299523 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:24.399158  299523 out.go:179] * Done! kubectl is now configured to use "no-preload-187607" cluster and "default" namespace by default
	I1123 08:44:20.262746  310933 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:44:20.266869  310933 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:44:20.266886  310933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:44:20.280566  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:44:20.496233  310933 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:44:20.496351  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-653361 minikube.k8s.io/updated_at=2025_11_23T08_44_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=newest-cni-653361 minikube.k8s.io/primary=true
	I1123 08:44:20.496443  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:20.507988  310933 ops.go:34] apiserver oom_adj: -16
	I1123 08:44:20.606165  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:21.106489  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:21.606483  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:22.106819  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:22.606297  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:23.106344  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:23.606998  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:24.106482  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:24.606886  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:24.680990  310933 kubeadm.go:1114] duration metric: took 4.184613866s to wait for elevateKubeSystemPrivileges
	I1123 08:44:24.681030  310933 kubeadm.go:403] duration metric: took 14.513667228s to StartCluster
	I1123 08:44:24.681047  310933 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:24.681116  310933 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:24.682504  310933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:24.682726  310933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:44:24.682742  310933 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:44:24.682798  310933 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:24.682915  310933 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-653361"
	I1123 08:44:24.682939  310933 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-653361"
	I1123 08:44:24.682965  310933 config.go:182] Loaded profile config "newest-cni-653361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:44:24.682957  310933 addons.go:70] Setting default-storageclass=true in profile "newest-cni-653361"
	I1123 08:44:24.683026  310933 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-653361"
	I1123 08:44:24.682973  310933 host.go:66] Checking if "newest-cni-653361" exists ...
	I1123 08:44:24.683360  310933 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:24.683566  310933 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:24.684903  310933 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:24.686286  310933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:24.707731  310933 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:24.708852  310933 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:24.708871  310933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:24.708940  310933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653361
	I1123 08:44:24.709498  310933 addons.go:239] Setting addon default-storageclass=true in "newest-cni-653361"
	I1123 08:44:24.709538  310933 host.go:66] Checking if "newest-cni-653361" exists ...
	I1123 08:44:24.710030  310933 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:24.736352  310933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/newest-cni-653361/id_rsa Username:docker}
	I1123 08:44:24.738589  310933 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:24.738609  310933 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:24.738666  310933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653361
	I1123 08:44:24.763567  310933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/newest-cni-653361/id_rsa Username:docker}
	I1123 08:44:24.781923  310933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
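
The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block (mapping host.minikube.internal to the host gateway IP, here 192.168.103.1) ahead of the `forward . /etc/resolv.conf` directive and a `log` directive ahead of `errors`, then feeds the result back through `kubectl replace`. Reconstructed from that command, the injected portion of the Corefile ends up roughly like this (other directives elided as "..."):

        log
        errors
        ...
        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

This is what the "host record injected into CoreDNS's ConfigMap" line a few entries below confirms.
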
	I1123 08:44:24.841189  310933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:24.875213  310933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:24.896839  310933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:24.990558  310933 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 08:44:24.991988  310933 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:24.992052  310933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:25.206427  310933 api_server.go:72] duration metric: took 523.65435ms to wait for apiserver process to appear ...
	I1123 08:44:25.206454  310933 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:25.206475  310933 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:44:25.213238  310933 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:44:25.214239  310933 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:25.214267  310933 api_server.go:131] duration metric: took 7.804462ms to wait for apiserver health ...
	I1123 08:44:25.214277  310933 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:25.214620  310933 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:44:25.216658  310933 addons.go:530] duration metric: took 533.865585ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:44:25.217317  310933 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:25.217348  310933 system_pods.go:61] "coredns-66bc5c9577-7bttc" [db2ce82f-dd5e-452f-9b7c-4f814d6d4824] Pending
	I1123 08:44:25.217359  310933 system_pods.go:61] "etcd-newest-cni-653361" [c88c51f3-384a-4e42-a5b5-eb56b4063ca0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:44:25.217368  310933 system_pods.go:61] "kindnet-sv4xk" [bf003336-6803-41a9-aaea-9aba51c062be] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:44:25.217382  310933 system_pods.go:61] "kube-apiserver-newest-cni-653361" [555ae394-11ee-4c38-9844-0eb84e52169e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:44:25.217392  310933 system_pods.go:61] "kube-controller-manager-newest-cni-653361" [65cfedeb-a3c7-4a0c-a38f-30b249ee0c5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:44:25.217401  310933 system_pods.go:61] "kube-proxy-hwjc5" [4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:44:25.217408  310933 system_pods.go:61] "kube-scheduler-newest-cni-653361" [158da57a-3f1c-4de3-94b2-d90400674ba2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:44:25.217417  310933 system_pods.go:61] "storage-provisioner" [3d48cd45-8d74-48f3-8cab-01e61921311b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:44:25.217425  310933 system_pods.go:74] duration metric: took 3.141242ms to wait for pod list to return data ...
	I1123 08:44:25.217434  310933 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:25.219598  310933 default_sa.go:45] found service account: "default"
	I1123 08:44:25.219617  310933 default_sa.go:55] duration metric: took 2.17718ms for default service account to be created ...
	I1123 08:44:25.219630  310933 kubeadm.go:587] duration metric: took 536.861993ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:44:25.219652  310933 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:25.222457  310933 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:25.222483  310933 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:25.222500  310933 node_conditions.go:105] duration metric: took 2.842318ms to run NodePressure ...
	I1123 08:44:25.222513  310933 start.go:242] waiting for startup goroutines ...
	I1123 08:44:25.495596  310933 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-653361" context rescaled to 1 replicas
	I1123 08:44:25.495650  310933 start.go:247] waiting for cluster config update ...
	I1123 08:44:25.495666  310933 start.go:256] writing updated cluster config ...
	I1123 08:44:25.495988  310933 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:25.550187  310933 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:25.551644  310933 out.go:179] * Done! kubectl is now configured to use "newest-cni-653361" cluster and "default" namespace by default
	I1123 08:44:23.681150  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:44:23.681176  314636 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:44:23.681240  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:23.709889  314636 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:23.709913  314636 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:23.709973  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:23.713967  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:23.717214  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:23.743544  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:23.815302  314636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:23.828243  314636 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-057894" to be "Ready" ...
	I1123 08:44:23.839717  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:44:23.839738  314636 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:44:23.844025  314636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:23.855392  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:44:23.855415  314636 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:44:23.871166  314636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:23.871577  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:44:23.871592  314636 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:44:23.887496  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:44:23.887520  314636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:44:23.905677  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:44:23.905739  314636 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:44:23.932066  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:44:23.932089  314636 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:44:23.975917  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:44:23.975942  314636 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:44:23.992525  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:44:23.992545  314636 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:44:24.006432  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:44:24.006455  314636 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:44:24.021494  314636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:44:25.930334  314636 node_ready.go:49] node "old-k8s-version-057894" is "Ready"
	I1123 08:44:25.930364  314636 node_ready.go:38] duration metric: took 2.102095132s for node "old-k8s-version-057894" to be "Ready" ...
	I1123 08:44:25.930379  314636 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:25.930433  314636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:26.809195  314636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.965139724s)
	I1123 08:44:26.809274  314636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.938087167s)
	I1123 08:44:27.189654  314636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.168118033s)
	I1123 08:44:27.189746  314636 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.259294218s)
	I1123 08:44:27.189783  314636 api_server.go:72] duration metric: took 3.544916472s to wait for apiserver process to appear ...
	I1123 08:44:27.189794  314636 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:27.189818  314636 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:44:27.190828  314636 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-057894 addons enable metrics-server
	
	I1123 08:44:27.192214  314636 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	
	
	==> CRI-O <==
	Nov 23 08:44:18 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:18.692086414Z" level=info msg="Starting container: 7e35afa2baad02cb8a630171fbb305d43ee176411d6be779254fe462167d7dd2" id=91ffb2c4-6b5c-415a-b6b3-b51785c4acfb name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:18 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:18.694134159Z" level=info msg="Started container" PID=1839 containerID=7e35afa2baad02cb8a630171fbb305d43ee176411d6be779254fe462167d7dd2 description=kube-system/coredns-66bc5c9577-8f8f5/coredns id=91ffb2c4-6b5c-415a-b6b3-b51785c4acfb name=/runtime.v1.RuntimeService/StartContainer sandboxID=708b3205b4f2af38733007453f9a3355fbef0cd16ef3a97efa16d31dd36df86c
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.17077837Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f0172992-10ff-4e67-bc43-fe5f6a35b652 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.170856223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.176335521Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6b67abce7416429dc679a9a3f14463b2a959cd95bd336c5e1867a9acf1284a64 UID:8d5f0f49-e259-488e-9b83-b51330a2bfdd NetNS:/var/run/netns/99476dd4-61ab-47d7-8fac-e52ec0119c88 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005200a0}] Aliases:map[]}"
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.176371783Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.185983596Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6b67abce7416429dc679a9a3f14463b2a959cd95bd336c5e1867a9acf1284a64 UID:8d5f0f49-e259-488e-9b83-b51330a2bfdd NetNS:/var/run/netns/99476dd4-61ab-47d7-8fac-e52ec0119c88 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005200a0}] Aliases:map[]}"
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.186098186Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.186752758Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.18744558Z" level=info msg="Ran pod sandbox 6b67abce7416429dc679a9a3f14463b2a959cd95bd336c5e1867a9acf1284a64 with infra container: default/busybox/POD" id=f0172992-10ff-4e67-bc43-fe5f6a35b652 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.188599616Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=05683feb-87ae-41df-8063-0be81bd07541 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.188772835Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=05683feb-87ae-41df-8063-0be81bd07541 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.188822179Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=05683feb-87ae-41df-8063-0be81bd07541 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.189603171Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e2e149ee-48f8-4663-a3a3-577d6e511c27 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.192314221Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.855196225Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=e2e149ee-48f8-4663-a3a3-577d6e511c27 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.855991105Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29713b1c-eac9-4fb1-af73-906317c752a0 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.857433353Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8797b06-8054-480f-ac44-985ce7f0ed7c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.862477037Z" level=info msg="Creating container: default/busybox/busybox" id=6ca5fd5e-dfcb-4079-8f27-402b2ed87236 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.862600002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.866187879Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.866642453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.891591307Z" level=info msg="Created container 9be4892cf70fc032092951df0db34cd06b82faf127e606cf857339c3e2255b26: default/busybox/busybox" id=6ca5fd5e-dfcb-4079-8f27-402b2ed87236 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.892077782Z" level=info msg="Starting container: 9be4892cf70fc032092951df0db34cd06b82faf127e606cf857339c3e2255b26" id=6e2f230b-9fd3-415d-bea9-c22c2e2e1ac8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:22 default-k8s-diff-port-726261 crio[772]: time="2025-11-23T08:44:22.893642784Z" level=info msg="Started container" PID=1918 containerID=9be4892cf70fc032092951df0db34cd06b82faf127e606cf857339c3e2255b26 description=default/busybox/busybox id=6e2f230b-9fd3-415d-bea9-c22c2e2e1ac8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b67abce7416429dc679a9a3f14463b2a959cd95bd336c5e1867a9acf1284a64
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	9be4892cf70fc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   6b67abce74164       busybox                                                default
	7e35afa2baad0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   708b3205b4f2a       coredns-66bc5c9577-8f8f5                               kube-system
	0f982f95c2e9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   318130992d54a       storage-provisioner                                    kube-system
	e6ea5638b515e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   b978372eba6e4       kube-proxy-sn4sp                                       kube-system
	ea0449a667cc9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   f7cb05dfaf455       kindnet-4zwgv                                          kube-system
	7e37d46e9546f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   8d784c1656682       etcd-default-k8s-diff-port-726261                      kube-system
	c371e0f101314       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   7803acdd607b2       kube-scheduler-default-k8s-diff-port-726261            kube-system
	6c8018eaca50b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   75a2734f73b54       kube-apiserver-default-k8s-diff-port-726261            kube-system
	76f1634361594       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   e89f8f5906939       kube-controller-manager-default-k8s-diff-port-726261   kube-system
	
	
	==> coredns [7e35afa2baad02cb8a630171fbb305d43ee176411d6be779254fe462167d7dd2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33991 - 63171 "HINFO IN 413254303465258955.7640895456640371756. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.895218902s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-726261
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-726261
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-726261
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-726261
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:44:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:18 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:18 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:18 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:44:18 +0000   Sun, 23 Nov 2025 08:44:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-726261
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                72a55ebb-5247-4a4a-aaf5-7a6c6d5788f6
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-8f8f5                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-726261                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-4zwgv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-726261             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-726261    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-sn4sp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-726261             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node default-k8s-diff-port-726261 event: Registered Node default-k8s-diff-port-726261 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-726261 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [7e37d46e9546f84a8b47a144cd956be6e258b9b64181d093a1a4505378e81eae] <==
	{"level":"warn","ts":"2025-11-23T08:43:58.186900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.193616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.201846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.209497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.216941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.224972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.236582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.242920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.250460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.258624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.265811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.273605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.286704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.293709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.301882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:43:58.361098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36338","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:44:03.667999Z","caller":"traceutil/trace.go:172","msg":"trace[107936421] transaction","detail":"{read_only:false; response_revision:281; number_of_response:1; }","duration":"116.003085ms","start":"2025-11-23T08:44:03.551971Z","end":"2025-11-23T08:44:03.667974Z","steps":["trace[107936421] 'process raft request'  (duration: 115.894986ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:03.837427Z","caller":"traceutil/trace.go:172","msg":"trace[401887154] transaction","detail":"{read_only:false; response_revision:282; number_of_response:1; }","duration":"135.829415ms","start":"2025-11-23T08:44:03.701575Z","end":"2025-11-23T08:44:03.837404Z","steps":["trace[401887154] 'process raft request'  (duration: 123.391199ms)","trace[401887154] 'compare'  (duration: 12.273129ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:44:04.360296Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.35973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-23T08:44:04.360378Z","caller":"traceutil/trace.go:172","msg":"trace[6610281] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:284; }","duration":"109.474764ms","start":"2025-11-23T08:44:04.250887Z","end":"2025-11-23T08:44:04.360362Z","steps":["trace[6610281] 'range keys from in-memory index tree'  (duration: 109.231789ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:04.747040Z","caller":"traceutil/trace.go:172","msg":"trace[399751698] transaction","detail":"{read_only:false; response_revision:287; number_of_response:1; }","duration":"145.907297ms","start":"2025-11-23T08:44:04.601111Z","end":"2025-11-23T08:44:04.747018Z","steps":["trace[399751698] 'process raft request'  (duration: 145.790016ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:44:05.444359Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"243.164042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:44:05.444419Z","caller":"traceutil/trace.go:172","msg":"trace[1454530341] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/persistent-volume-binder; range_end:; response_count:0; response_revision:290; }","duration":"243.232459ms","start":"2025-11-23T08:44:05.201171Z","end":"2025-11-23T08:44:05.444403Z","steps":["trace[1454530341] 'range keys from in-memory index tree'  (duration: 243.0981ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:44:05.444353Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.680125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:44:05.444508Z","caller":"traceutil/trace.go:172","msg":"trace[854614824] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:290; }","duration":"110.853301ms","start":"2025-11-23T08:44:05.333643Z","end":"2025-11-23T08:44:05.444496Z","steps":["trace[854614824] 'range keys from in-memory index tree'  (duration: 110.572345ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:44:31 up  1:26,  0 user,  load average: 5.48, 3.71, 2.31
	Linux default-k8s-diff-port-726261 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea0449a667cc9edfb34eafc7960072ad5d31804f35ce305045e13eda89819d72] <==
	I1123 08:44:07.573896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:07.574115       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:44:07.574282       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:07.574296       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:07.574318       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:07.868875       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:07.868911       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:07.868924       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:07.869026       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:08.169363       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:08.169395       1 metrics.go:72] Registering metrics
	I1123 08:44:08.169470       1 controller.go:711] "Syncing nftables rules"
	I1123 08:44:17.812896       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:44:17.812981       1 main.go:301] handling current node
	I1123 08:44:27.810852       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:44:27.810893       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6c8018eaca50bc18df3d6d5efba285b3b3703e9b820ae587c6e7e2872fcaf86d] <==
	I1123 08:43:58.959047       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 08:43:58.959380       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:43:58.960250       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:43:58.965280       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:43:58.970948       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:43:58.999062       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:43:59.155810       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:43:59.864532       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:43:59.870057       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:43:59.870157       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:00.433600       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:00.479822       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:00.568510       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:44:00.577373       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:44:00.578819       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:44:00.584231       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:44:00.905108       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:44:01.762609       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:44:01.770532       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:44:01.778004       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:44:06.310330       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:06.314286       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:06.756095       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:44:07.009014       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:44:29.974019       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:42488: use of closed network connection
	
	
	==> kube-controller-manager [76f1634361594bf2238eb957dd6d6249f2789cff442e4b2b0f32116c207a2b35] <==
	I1123 08:44:05.882823       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:44:05.891102       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:44:05.904175       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:44:05.904284       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:44:05.904292       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 08:44:05.905311       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:44:05.905336       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:44:05.905669       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:44:05.906434       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:44:05.908778       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:44:05.909903       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:44:05.909945       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:44:05.909967       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:44:05.912170       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:44:05.912235       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:44:05.912239       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:05.912253       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:44:05.912270       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:44:05.912523       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:05.914910       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:44:05.919048       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:44:05.927544       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:44:05.932818       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:44:05.932887       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:20.881529       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e6ea5638b515e92bfc3aa740277b2bfc9bd545b3b8c7ba0223450ff2b93a1354] <==
	I1123 08:44:07.426443       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:07.495350       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:07.596156       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:07.596196       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:44:07.596269       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:07.618021       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:07.618092       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:07.624254       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:07.624879       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:07.624913       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:07.629357       1 config.go:200] "Starting service config controller"
	I1123 08:44:07.629378       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:07.629403       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:07.629408       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:07.629452       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:07.629457       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:07.629563       1 config.go:309] "Starting node config controller"
	I1123 08:44:07.629581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:07.629590       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:07.730448       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:44:07.730462       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:44:07.730489       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c371e0f1013149aed0675777c2e0c4c03c2d6a8e11f45d543927cd0ceda8d893] <==
	E1123 08:43:58.931180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:43:58.931220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:43:58.931673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:43:58.933322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:43:58.933390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:43:58.933415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:43:58.933497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:43:58.933483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:43:58.933576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:43:58.933758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:43:58.933762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:43:58.933833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:43:58.933890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:43:58.933957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:43:58.934002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:43:59.731971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:43:59.794497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:43:59.905276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:43:59.948213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:43:59.994999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:44:00.021634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:44:00.055405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:44:00.087751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:44:00.157266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1123 08:44:01.928450       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:44:02 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:02.656878    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-726261" podStartSLOduration=1.656855271 podStartE2EDuration="1.656855271s" podCreationTimestamp="2025-11-23 08:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:02.655743109 +0000 UTC m=+1.135767987" watchObservedRunningTime="2025-11-23 08:44:02.656855271 +0000 UTC m=+1.136880146"
	Nov 23 08:44:02 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:02.667002    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-726261" podStartSLOduration=1.666980226 podStartE2EDuration="1.666980226s" podCreationTimestamp="2025-11-23 08:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:02.666823197 +0000 UTC m=+1.146848076" watchObservedRunningTime="2025-11-23 08:44:02.666980226 +0000 UTC m=+1.147005104"
	Nov 23 08:44:02 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:02.696238    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-726261" podStartSLOduration=1.696216929 podStartE2EDuration="1.696216929s" podCreationTimestamp="2025-11-23 08:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:02.682752325 +0000 UTC m=+1.162777205" watchObservedRunningTime="2025-11-23 08:44:02.696216929 +0000 UTC m=+1.176241807"
	Nov 23 08:44:02 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:02.696360    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-726261" podStartSLOduration=1.696352474 podStartE2EDuration="1.696352474s" podCreationTimestamp="2025-11-23 08:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:02.695823819 +0000 UTC m=+1.175848704" watchObservedRunningTime="2025-11-23 08:44:02.696352474 +0000 UTC m=+1.176377353"
	Nov 23 08:44:05 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:05.954451    1328 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:44:05 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:05.955200    1328 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.133476    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f78be2d8-1fdb-429f-be98-0cc11b6b8e40-xtables-lock\") pod \"kube-proxy-sn4sp\" (UID: \"f78be2d8-1fdb-429f-be98-0cc11b6b8e40\") " pod="kube-system/kube-proxy-sn4sp"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.133540    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f78be2d8-1fdb-429f-be98-0cc11b6b8e40-lib-modules\") pod \"kube-proxy-sn4sp\" (UID: \"f78be2d8-1fdb-429f-be98-0cc11b6b8e40\") " pod="kube-system/kube-proxy-sn4sp"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.133568    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zx4l\" (UniqueName: \"kubernetes.io/projected/f78be2d8-1fdb-429f-be98-0cc11b6b8e40-kube-api-access-2zx4l\") pod \"kube-proxy-sn4sp\" (UID: \"f78be2d8-1fdb-429f-be98-0cc11b6b8e40\") " pod="kube-system/kube-proxy-sn4sp"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.133597    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f78be2d8-1fdb-429f-be98-0cc11b6b8e40-kube-proxy\") pod \"kube-proxy-sn4sp\" (UID: \"f78be2d8-1fdb-429f-be98-0cc11b6b8e40\") " pod="kube-system/kube-proxy-sn4sp"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.133622    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhw8\" (UniqueName: \"kubernetes.io/projected/9b5a136a-e2ec-4e01-b164-d48b0b01ccf3-kube-api-access-2lhw8\") pod \"kindnet-4zwgv\" (UID: \"9b5a136a-e2ec-4e01-b164-d48b0b01ccf3\") " pod="kube-system/kindnet-4zwgv"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.133648    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b5a136a-e2ec-4e01-b164-d48b0b01ccf3-cni-cfg\") pod \"kindnet-4zwgv\" (UID: \"9b5a136a-e2ec-4e01-b164-d48b0b01ccf3\") " pod="kube-system/kindnet-4zwgv"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.133832    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b5a136a-e2ec-4e01-b164-d48b0b01ccf3-xtables-lock\") pod \"kindnet-4zwgv\" (UID: \"9b5a136a-e2ec-4e01-b164-d48b0b01ccf3\") " pod="kube-system/kindnet-4zwgv"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.133908    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b5a136a-e2ec-4e01-b164-d48b0b01ccf3-lib-modules\") pod \"kindnet-4zwgv\" (UID: \"9b5a136a-e2ec-4e01-b164-d48b0b01ccf3\") " pod="kube-system/kindnet-4zwgv"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.663083    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sn4sp" podStartSLOduration=0.663061056 podStartE2EDuration="663.061056ms" podCreationTimestamp="2025-11-23 08:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:07.654102118 +0000 UTC m=+6.134126996" watchObservedRunningTime="2025-11-23 08:44:07.663061056 +0000 UTC m=+6.143085934"
	Nov 23 08:44:07 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:07.673463    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4zwgv" podStartSLOduration=0.673427308 podStartE2EDuration="673.427308ms" podCreationTimestamp="2025-11-23 08:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:07.673315521 +0000 UTC m=+6.153340399" watchObservedRunningTime="2025-11-23 08:44:07.673427308 +0000 UTC m=+6.153452186"
	Nov 23 08:44:18 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:18.306586    1328 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:44:18 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:18.413189    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47dd6a2f-d285-4c11-9971-aba81adb5848-tmp\") pod \"storage-provisioner\" (UID: \"47dd6a2f-d285-4c11-9971-aba81adb5848\") " pod="kube-system/storage-provisioner"
	Nov 23 08:44:18 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:18.413255    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cd7n\" (UniqueName: \"kubernetes.io/projected/47dd6a2f-d285-4c11-9971-aba81adb5848-kube-api-access-2cd7n\") pod \"storage-provisioner\" (UID: \"47dd6a2f-d285-4c11-9971-aba81adb5848\") " pod="kube-system/storage-provisioner"
	Nov 23 08:44:18 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:18.413310    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2972f876-77f7-4ac2-80df-ac460f83663e-config-volume\") pod \"coredns-66bc5c9577-8f8f5\" (UID: \"2972f876-77f7-4ac2-80df-ac460f83663e\") " pod="kube-system/coredns-66bc5c9577-8f8f5"
	Nov 23 08:44:18 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:18.413334    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp4vt\" (UniqueName: \"kubernetes.io/projected/2972f876-77f7-4ac2-80df-ac460f83663e-kube-api-access-xp4vt\") pod \"coredns-66bc5c9577-8f8f5\" (UID: \"2972f876-77f7-4ac2-80df-ac460f83663e\") " pod="kube-system/coredns-66bc5c9577-8f8f5"
	Nov 23 08:44:19 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:19.681354    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.681332289 podStartE2EDuration="12.681332289s" podCreationTimestamp="2025-11-23 08:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:19.681016903 +0000 UTC m=+18.161041780" watchObservedRunningTime="2025-11-23 08:44:19.681332289 +0000 UTC m=+18.161357169"
	Nov 23 08:44:21 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:21.864616    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8f8f5" podStartSLOduration=14.864594376 podStartE2EDuration="14.864594376s" podCreationTimestamp="2025-11-23 08:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:19.689987933 +0000 UTC m=+18.170012814" watchObservedRunningTime="2025-11-23 08:44:21.864594376 +0000 UTC m=+20.344619253"
	Nov 23 08:44:21 default-k8s-diff-port-726261 kubelet[1328]: I1123 08:44:21.934728    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvqqg\" (UniqueName: \"kubernetes.io/projected/8d5f0f49-e259-488e-9b83-b51330a2bfdd-kube-api-access-jvqqg\") pod \"busybox\" (UID: \"8d5f0f49-e259-488e-9b83-b51330a2bfdd\") " pod="default/busybox"
	Nov 23 08:44:29 default-k8s-diff-port-726261 kubelet[1328]: E1123 08:44:29.974076    1328 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45470->127.0.0.1:37925: write tcp 127.0.0.1:45470->127.0.0.1:37925: write: broken pipe
	
	
	==> storage-provisioner [0f982f95c2e9a78ee166450b1fcd22cb10c37c8c7c7aea0cf1070b9171057f20] <==
	I1123 08:44:18.703019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:44:18.711129       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:44:18.711164       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:44:18.713324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:18.717973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:44:18.718103       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:44:18.718186       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1459ed91-8156-4cda-ba23-7e39e4104244", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-726261_6b7ed660-66a3-4d20-8233-5fb9fba7bea4 became leader
	I1123 08:44:18.718266       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-726261_6b7ed660-66a3-4d20-8233-5fb9fba7bea4!
	W1123 08:44:18.720929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:18.723796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:44:18.819033       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-726261_6b7ed660-66a3-4d20-8233-5fb9fba7bea4!
	W1123 08:44:20.727006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:20.730737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:22.735366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:22.739912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:24.745078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:24.750473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:26.754328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:26.760120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:28.762796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:28.766757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:30.769592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:30.774649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-726261 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.98s)
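For what it's worth, the storage-provisioner log in the post-mortem above is full of "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings because the provisioner's leader election still takes its k8s.io-minikube-hostpath lock on a v1 Endpoints object rather than a coordination.k8s.io Lease. The warnings look harmless here (the lock is acquired successfully a few lines later), but the lock object can be inspected directly; a sketch, reusing the kubectl context name from this report:

    # Legacy Endpoints-based leader-election lock used by storage-provisioner:
    kubectl --context default-k8s-diff-port-726261 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml
    # Lease-based locks (the non-deprecated mechanism) would show up under:
    kubectl --context default-k8s-diff-port-726261 -n kube-system get lease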

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (239.22825ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
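The MK_ADDON_ENABLE_PAUSED failure is not about metrics-server itself: before enabling an addon, minikube runs a "check paused" step that shells out to "sudo runc list -f json", and runc exits 1 because its state directory /run/runc does not exist. The docker inspect output below shows the kicbase container mounts /run as a tmpfs, so nothing under it persists. A sketch of reproducing the check by hand, with the container name taken from this report:

    # Re-run the exact command minikube's paused-check uses, inside the node:
    docker exec no-preload-187607 sudo runc list -f json
    # Confirm the state directory runc complains about is absent:
    docker exec no-preload-187607 ls -ld /run/runc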
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-187607 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-187607 describe deploy/metrics-server -n kube-system: exit status 1 (54.462339ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-187607 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
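For context, the assertion at start_stop_delete_test.go:219 is a substring check: the describe output for the metrics-server deployment must contain the image rewritten by the --images and --registries flags. Since the enable command already failed, the deployment was never created and the check runs against empty output. It reduces to roughly the following; a sketch, not the test's literal code:

    # Print just the container image of the addon deployment:
    kubectl --context no-preload-187607 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # The test expects: fake.domain/registry.k8s.io/echoserver:1.4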
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-187607
helpers_test.go:243: (dbg) docker inspect no-preload-187607:

-- stdout --
	[
	    {
	        "Id": "c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469",
	        "Created": "2025-11-23T08:43:30.899099908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:43:31.342109933Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/hostname",
	        "HostsPath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/hosts",
	        "LogPath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469-json.log",
	        "Name": "/no-preload-187607",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-187607:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-187607",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469",
	                "LowerDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-187607",
	                "Source": "/var/lib/docker/volumes/no-preload-187607/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-187607",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-187607",
	                "name.minikube.sigs.k8s.io": "no-preload-187607",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8bc5ace0c6c3876daf30fe2df526df9896ce56dafd4d3670e95fa2727eb0adaf",
	            "SandboxKey": "/var/run/docker/netns/8bc5ace0c6c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-187607": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e4a86ee726dad104f8707d936e5a79c6311cee3cba1074fc9a2490264915ec02",
	                    "EndpointID": "d6b64872a2ec6e60956234c016132cf8ecdb811d716da3266d1f487d4de81c52",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "52:fe:3a:8c:59:8f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-187607",
	                        "c79339fc6cb1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
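The inspect dump above is the raw Docker state for the no-preload-187607 node container. One detail worth noting when reading it: `HostConfig.PortBindings` pins each forwarded port to 127.0.0.1 with an empty `HostPort`, so Docker assigns ephemeral host ports, and the actual assignments (33096-33100) appear only under `NetworkSettings.Ports`. A minimal sketch of reading one mapping back out, using the same Go template minikube itself runs later in this log (container name taken from the dump above):

    # Host port mapped to the node's SSH port (22/tcp); prints 33096 for this run
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-187607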
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-187607 -n no-preload-187607
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-187607 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-351793 sudo docker system info                                                                                                                                                                                                      │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │                     │
	│ ssh     │ -p bridge-351793 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo containerd config dump                                                                                                                                                                                                  │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo crio config                                                                                                                                                                                                             │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p bridge-351793                                                                                                                                                                                                                              │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ -p old-k8s-version-057894 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-057894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p newest-cni-653361 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
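
Both the Audit table above and the Last Start log below come from the post-mortem `minikube logs` call recorded at helpers_test.go:255; the equivalent manual invocation against this profile would be:

    out/minikube-linux-amd64 -p no-preload-187607 logs -n 25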
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:44:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:44:16.418060  314636 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:44:16.418184  314636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:16.418195  314636 out.go:374] Setting ErrFile to fd 2...
	I1123 08:44:16.418200  314636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:16.418484  314636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:44:16.419017  314636 out.go:368] Setting JSON to false
	I1123 08:44:16.420248  314636 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5203,"bootTime":1763882253,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:44:16.420302  314636 start.go:143] virtualization: kvm guest
	I1123 08:44:16.422513  314636 out.go:179] * [old-k8s-version-057894] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:44:16.426605  314636 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:44:16.426606  314636 notify.go:221] Checking for updates...
	I1123 08:44:16.428841  314636 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:44:16.429902  314636 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:16.430819  314636 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:44:16.431702  314636 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:44:16.432602  314636 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:44:16.434097  314636 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:44:16.435753  314636 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1123 08:44:16.436562  314636 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:44:16.462564  314636 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:44:16.462643  314636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:16.532612  314636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:44:16.521915311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:16.532791  314636 docker.go:319] overlay module found
	I1123 08:44:16.535057  314636 out.go:179] * Using the docker driver based on existing profile
	I1123 08:44:16.536052  314636 start.go:309] selected driver: docker
	I1123 08:44:16.536065  314636 start.go:927] validating driver "docker" against &{Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:16.536188  314636 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:44:16.536795  314636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:16.600833  314636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:44:16.59146408 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:16.601200  314636 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:16.601242  314636 cni.go:84] Creating CNI manager for ""
	I1123 08:44:16.601318  314636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:16.601385  314636 start.go:353] cluster config:
	{Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:16.603067  314636 out.go:179] * Starting "old-k8s-version-057894" primary control-plane node in "old-k8s-version-057894" cluster
	I1123 08:44:16.603971  314636 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:44:16.605060  314636 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:44:16.606152  314636 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:44:16.606180  314636 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1123 08:44:16.606205  314636 cache.go:65] Caching tarball of preloaded images
	I1123 08:44:16.606246  314636 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:44:16.606294  314636 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:44:16.606309  314636 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1123 08:44:16.606401  314636 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/config.json ...
	I1123 08:44:16.629025  314636 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:44:16.629041  314636 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:44:16.629055  314636 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:44:16.629079  314636 start.go:360] acquireMachinesLock for old-k8s-version-057894: {Name:mk24ea9464b285d5ccac107c6969c1ae844d534b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:16.629128  314636 start.go:364] duration metric: took 33.636µs to acquireMachinesLock for "old-k8s-version-057894"
	I1123 08:44:16.629143  314636 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:44:16.629151  314636 fix.go:54] fixHost starting: 
	I1123 08:44:16.629339  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:16.650710  314636 fix.go:112] recreateIfNeeded on old-k8s-version-057894: state=Stopped err=<nil>
	W1123 08:44:16.650739  314636 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 08:44:13.642139  301517 node_ready.go:57] node "default-k8s-diff-port-726261" has "Ready":"False" status (will retry)
	W1123 08:44:16.142649  301517 node_ready.go:57] node "default-k8s-diff-port-726261" has "Ready":"False" status (will retry)
	W1123 08:44:13.972407  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	W1123 08:44:15.972617  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	W1123 08:44:18.472348  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
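
The node_ready.go:57 warnings above are the test's poll loop waiting for each node's Ready condition to flip to True, which happens only once the CNI (kindnet here) is running. A sketch of the equivalent manual check, assuming kubectl is pointed at the cluster under test:

    # Prints "False" while the node is NotReady, "True" once the CNI is up
    kubectl get node no-preload-187607 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'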
	I1123 08:44:20.247025  310933 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:44:20.247135  310933 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:44:20.247262  310933 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:44:20.247346  310933 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:44:20.247409  310933 kubeadm.go:319] OS: Linux
	I1123 08:44:20.247472  310933 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:44:20.247514  310933 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:44:20.247591  310933 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:44:20.247675  310933 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:44:20.247768  310933 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:44:20.247846  310933 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:44:20.247920  310933 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:44:20.247982  310933 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:44:20.248089  310933 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:44:20.248229  310933 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:44:20.248363  310933 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:44:20.248480  310933 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:44:20.249638  310933 out.go:252]   - Generating certificates and keys ...
	I1123 08:44:20.249750  310933 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:44:20.249829  310933 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:44:20.249910  310933 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:44:20.249991  310933 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:44:20.250044  310933 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:44:20.250090  310933 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:44:20.250160  310933 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:44:20.250299  310933 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-653361] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:44:20.250384  310933 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:44:20.250497  310933 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-653361] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:44:20.250558  310933 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:44:20.250625  310933 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:44:20.250670  310933 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:44:20.250763  310933 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:44:20.250844  310933 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:44:20.250930  310933 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:44:20.251013  310933 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:44:20.251103  310933 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:44:20.251193  310933 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:44:20.251292  310933 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:44:20.251392  310933 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:44:20.253553  310933 out.go:252]   - Booting up control plane ...
	I1123 08:44:20.253634  310933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:44:20.253732  310933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:44:20.253862  310933 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:44:20.253996  310933 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:44:20.254161  310933 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:44:20.254325  310933 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:44:20.254452  310933 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:44:20.254510  310933 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:44:20.254656  310933 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:44:20.254855  310933 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:44:20.254947  310933 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.635187ms
	I1123 08:44:20.255081  310933 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:44:20.255191  310933 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:44:20.255310  310933 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:44:20.255410  310933 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:44:20.255509  310933 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.543941134s
	I1123 08:44:20.255592  310933 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.711134146s
	I1123 08:44:20.255672  310933 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.50190822s
	I1123 08:44:20.255836  310933 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:44:20.255991  310933 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:44:20.256065  310933 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:44:20.256328  310933 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-653361 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:44:20.256394  310933 kubeadm.go:319] [bootstrap-token] Using token: 0wyvo8.gmxzh0st4hzmadft
	I1123 08:44:20.258116  310933 out.go:252]   - Configuring RBAC rules ...
	I1123 08:44:20.258221  310933 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:44:20.258316  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:44:20.258491  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:44:20.258665  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:44:20.258863  310933 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:44:20.258955  310933 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:44:20.259072  310933 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:44:20.259116  310933 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:44:20.259177  310933 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:44:20.259186  310933 kubeadm.go:319] 
	I1123 08:44:20.259252  310933 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:44:20.259265  310933 kubeadm.go:319] 
	I1123 08:44:20.259329  310933 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:44:20.259335  310933 kubeadm.go:319] 
	I1123 08:44:20.259360  310933 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:44:20.259415  310933 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:44:20.259464  310933 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:44:20.259470  310933 kubeadm.go:319] 
	I1123 08:44:20.259529  310933 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:44:20.259536  310933 kubeadm.go:319] 
	I1123 08:44:20.259575  310933 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:44:20.259581  310933 kubeadm.go:319] 
	I1123 08:44:20.259624  310933 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:44:20.259706  310933 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:44:20.259768  310933 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:44:20.259774  310933 kubeadm.go:319] 
	I1123 08:44:20.259848  310933 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:44:20.259953  310933 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:44:20.259960  310933 kubeadm.go:319] 
	I1123 08:44:20.260033  310933 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0wyvo8.gmxzh0st4hzmadft \
	I1123 08:44:20.260124  310933 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 08:44:20.260143  310933 kubeadm.go:319] 	--control-plane 
	I1123 08:44:20.260152  310933 kubeadm.go:319] 
	I1123 08:44:20.260224  310933 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:44:20.260230  310933 kubeadm.go:319] 
	I1123 08:44:20.260302  310933 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0wyvo8.gmxzh0st4hzmadft \
	I1123 08:44:20.260407  310933 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
	I1123 08:44:20.260418  310933 cni.go:84] Creating CNI manager for ""
	I1123 08:44:20.260424  310933 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:20.261586  310933 out.go:179] * Configuring CNI (Container Networking Interface) ...
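
The join commands printed by kubeadm above pin the cluster CA via --discovery-token-ca-cert-hash. That hash is not a secret; per the upstream kubeadm documentation it can be recomputed from the CA certificate on the control plane (default PKI path assumed):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'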
	I1123 08:44:16.654090  314636 out.go:252] * Restarting existing docker container for "old-k8s-version-057894" ...
	I1123 08:44:16.654186  314636 cli_runner.go:164] Run: docker start old-k8s-version-057894
	I1123 08:44:16.984977  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:17.023793  314636 kic.go:430] container "old-k8s-version-057894" state is running.
	I1123 08:44:17.024222  314636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-057894
	I1123 08:44:17.045881  314636 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/config.json ...
	I1123 08:44:17.046142  314636 machine.go:94] provisionDockerMachine start ...
	I1123 08:44:17.046245  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:17.063894  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:17.064129  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:17.064143  314636 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:44:17.064767  314636 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50416->127.0.0.1:33111: read: connection reset by peer
	I1123 08:44:20.207281  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-057894
	
	I1123 08:44:20.207320  314636 ubuntu.go:182] provisioning hostname "old-k8s-version-057894"
	I1123 08:44:20.207405  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.225411  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:20.225640  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:20.225654  314636 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-057894 && echo "old-k8s-version-057894" | sudo tee /etc/hostname
	I1123 08:44:20.384120  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-057894
	
	I1123 08:44:20.384196  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.401285  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:20.401561  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:20.401587  314636 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-057894' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-057894/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-057894' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:44:20.553936  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: 
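
The SSH snippet above is minikube's hostname fix-up: if /etc/hosts has no entry for old-k8s-version-057894, it either rewrites an existing 127.0.1.1 line or appends one, so hostname resolution inside the node never stalls. A quick manual verification, assuming the profile is running:

    out/minikube-linux-amd64 -p old-k8s-version-057894 ssh -- grep 127.0.1.1 /etc/hosts
    # expected: 127.0.1.1 old-k8s-version-057894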
	I1123 08:44:20.553968  314636 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:44:20.554005  314636 ubuntu.go:190] setting up certificates
	I1123 08:44:20.554025  314636 provision.go:84] configureAuth start
	I1123 08:44:20.554402  314636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-057894
	I1123 08:44:20.592136  314636 provision.go:143] copyHostCerts
	I1123 08:44:20.592213  314636 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:44:20.592232  314636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:44:20.592312  314636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:44:20.592436  314636 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:44:20.592447  314636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:44:20.592484  314636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:44:20.592573  314636 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:44:20.592582  314636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:44:20.592614  314636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:44:20.592714  314636 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-057894 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-057894]
	I1123 08:44:20.652221  314636 provision.go:177] copyRemoteCerts
	I1123 08:44:20.652281  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:44:20.652322  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.672322  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:20.773680  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:44:20.790760  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:44:20.807788  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:44:20.824033  314636 provision.go:87] duration metric: took 269.99842ms to configureAuth
	I1123 08:44:20.824051  314636 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:44:20.824240  314636 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:44:20.824327  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:20.842425  314636 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:20.842737  314636 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1123 08:44:20.842764  314636 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:44:21.173321  314636 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:44:21.173348  314636 machine.go:97] duration metric: took 4.127187999s to provisionDockerMachine
	I1123 08:44:21.173360  314636 start.go:293] postStartSetup for "old-k8s-version-057894" (driver="docker")
	I1123 08:44:21.173371  314636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:44:21.173426  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:44:21.173498  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.192289  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:21.293367  314636 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:44:21.296864  314636 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:44:21.296893  314636 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:44:21.296904  314636 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:44:21.296969  314636 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:44:21.297081  314636 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:44:21.297209  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:44:21.304802  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:44:21.321290  314636 start.go:296] duration metric: took 147.91911ms for postStartSetup
	I1123 08:44:21.321383  314636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:44:21.321433  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.339672  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:18.642288  301517 node_ready.go:49] node "default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:18.642321  301517 node_ready.go:38] duration metric: took 11.503564271s for node "default-k8s-diff-port-726261" to be "Ready" ...
	I1123 08:44:18.642339  301517 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:18.642388  301517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:18.658421  301517 api_server.go:72] duration metric: took 11.812908089s to wait for apiserver process to appear ...
	I1123 08:44:18.658458  301517 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:18.658477  301517 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:44:18.663288  301517 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:44:18.664345  301517 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:18.664369  301517 api_server.go:131] duration metric: took 5.904232ms to wait for apiserver health ...
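
The healthz probe above is a plain HTTPS GET against the API server, which this profile serves on port 8444 (hence "default-k8s-diff-port"). The equivalent manual check from the host, a sketch assuming the node address 192.168.85.2 is reachable:

    curl -sk https://192.168.85.2:8444/healthz
    # => ok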
	I1123 08:44:18.664377  301517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:18.668437  301517 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:18.668483  301517 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:18.668492  301517 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:18.668501  301517 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:18.668511  301517 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:18.668516  301517 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:18.668521  301517 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:18.668529  301517 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:18.668535  301517 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:18.668543  301517 system_pods.go:74] duration metric: took 4.160794ms to wait for pod list to return data ...
	I1123 08:44:18.668557  301517 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:18.670768  301517 default_sa.go:45] found service account: "default"
	I1123 08:44:18.670786  301517 default_sa.go:55] duration metric: took 2.223017ms for default service account to be created ...
	I1123 08:44:18.670796  301517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:44:18.673368  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:18.673401  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:18.673412  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:18.673425  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:18.673434  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:18.673449  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:18.673462  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:18.673471  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:18.673479  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:18.673510  301517 retry.go:31] will retry after 273.138898ms: missing components: kube-dns
	I1123 08:44:18.950428  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:18.950462  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:18.950468  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:18.950474  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:18.950477  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:18.950486  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:18.950492  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:18.950497  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:18.950505  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:18.950527  301517 retry.go:31] will retry after 324.368056ms: missing components: kube-dns
	I1123 08:44:19.282612  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:19.282655  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:19.282664  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:19.282681  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:19.282711  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:19.282717  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:19.282722  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:19.282728  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:19.282735  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:19.282752  301517 retry.go:31] will retry after 341.175275ms: missing components: kube-dns
	I1123 08:44:19.628067  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:19.628106  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:19.628115  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:19.628124  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:19.628131  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:19.628136  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:19.628141  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:19.628147  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:19.628151  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:19.628166  301517 retry.go:31] will retry after 385.479643ms: missing components: kube-dns
	I1123 08:44:20.019211  301517 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:20.019262  301517 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running
	I1123 08:44:20.019271  301517 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running
	I1123 08:44:20.019278  301517 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running
	I1123 08:44:20.019290  301517 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running
	I1123 08:44:20.019297  301517 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running
	I1123 08:44:20.019302  301517 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running
	I1123 08:44:20.019307  301517 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running
	I1123 08:44:20.019313  301517 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running
	I1123 08:44:20.019328  301517 system_pods.go:126] duration metric: took 1.348525547s to wait for k8s-apps to be running ...
	I1123 08:44:20.019337  301517 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:44:20.019398  301517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:20.032534  301517 system_svc.go:56] duration metric: took 13.191771ms WaitForService to wait for kubelet
	I1123 08:44:20.032556  301517 kubeadm.go:587] duration metric: took 13.187050567s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:20.032570  301517 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:20.035222  301517 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:20.035255  301517 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:20.035272  301517 node_conditions.go:105] duration metric: took 2.697218ms to run NodePressure ...
	I1123 08:44:20.035284  301517 start.go:242] waiting for startup goroutines ...
	I1123 08:44:20.035296  301517 start.go:247] waiting for cluster config update ...
	I1123 08:44:20.035308  301517 start.go:256] writing updated cluster config ...
	I1123 08:44:20.035582  301517 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:20.039148  301517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:20.042349  301517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.046265  301517 pod_ready.go:94] pod "coredns-66bc5c9577-8f8f5" is "Ready"
	I1123 08:44:20.046284  301517 pod_ready.go:86] duration metric: took 3.909737ms for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.048015  301517 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.051563  301517 pod_ready.go:94] pod "etcd-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:20.051582  301517 pod_ready.go:86] duration metric: took 3.548608ms for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.053391  301517 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.058527  301517 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:20.058551  301517 pod_ready.go:86] duration metric: took 5.13961ms for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.060160  301517 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.443432  301517 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:20.443460  301517 pod_ready.go:86] duration metric: took 383.282782ms for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:20.644026  301517 pod_ready.go:83] waiting for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.043432  301517 pod_ready.go:94] pod "kube-proxy-sn4sp" is "Ready"
	I1123 08:44:21.043456  301517 pod_ready.go:86] duration metric: took 399.407792ms for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.244389  301517 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.644143  301517 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-726261" is "Ready"
	I1123 08:44:21.644175  301517 pod_ready.go:86] duration metric: took 399.759889ms for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:21.644190  301517 pod_ready.go:40] duration metric: took 1.605017538s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:21.697309  301517 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:21.699630  301517 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-726261" cluster and "default" namespace by default
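Both clusters in this log gate startup on the apiserver's /healthz endpoint returning 200 "ok" (api_server.go:253-279 above). A minimal sketch of such a probe in Go; the URL is copied from the log, and skipping TLS verification is an assumption of the sketch, since the probe targets a self-signed cluster certificate:

// healthz_sketch.go: poll an apiserver /healthz endpoint until it
// returns 200 or a deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: the health probe does not verify the self-signed cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}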
	I1123 08:44:21.437237  314636 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:44:21.441902  314636 fix.go:56] duration metric: took 4.812745863s for fixHost
	I1123 08:44:21.441927  314636 start.go:83] releasing machines lock for "old-k8s-version-057894", held for 4.812789083s
	I1123 08:44:21.441996  314636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-057894
	I1123 08:44:21.461031  314636 ssh_runner.go:195] Run: cat /version.json
	I1123 08:44:21.461084  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.461105  314636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:44:21.461168  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:21.480163  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:21.480473  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:21.634506  314636 ssh_runner.go:195] Run: systemctl --version
	I1123 08:44:21.641286  314636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:44:21.685409  314636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:44:21.690169  314636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:44:21.690228  314636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:44:21.698154  314636 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:44:21.698171  314636 start.go:496] detecting cgroup driver to use...
	I1123 08:44:21.698198  314636 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:44:21.698236  314636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:44:21.711950  314636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:44:21.726746  314636 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:44:21.726796  314636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:44:21.741579  314636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:44:21.754743  314636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:44:21.841306  314636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:44:21.930875  314636 docker.go:234] disabling docker service ...
	I1123 08:44:21.930940  314636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:44:21.944498  314636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:44:21.957091  314636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:44:22.052960  314636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:44:22.135533  314636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:44:22.147635  314636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:44:22.163753  314636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1123 08:44:22.163824  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.173900  314636 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:44:22.173957  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.184459  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.193984  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.202599  314636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:44:22.212728  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.221809  314636 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.229818  314636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:22.238209  314636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:44:22.245345  314636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:44:22.252238  314636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:22.338869  314636 ssh_runner.go:195] Run: sudo systemctl restart crio
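The sed pipeline above (08:44:22.163 through 08:44:22.229) boils down to line-level rewrites of /etc/crio/crio.conf.d/02-crio.conf: set the pause image, set the cgroup manager, and re-add conmon_cgroup = "pod" right after it. A minimal sketch of the same edits done in pure Go; minikube actually runs sed over ssh, exactly as the log shows:

// crioconf_sketch.go: apply the pause_image / cgroup_manager /
// conmon_cgroup rewrites to a CRI-O drop-in config, in memory.
package main

import (
	"fmt"
	"strings"
)

func rewriteCrioConf(conf, pauseImage, cgroupMgr string) string {
	var out []string
	for _, line := range strings.Split(conf, "\n") {
		switch {
		case strings.Contains(line, "pause_image = "):
			line = fmt.Sprintf("pause_image = %q", pauseImage)
		case strings.Contains(line, "cgroup_manager = "):
			line = fmt.Sprintf("cgroup_manager = %q", cgroupMgr)
			// conmon must live in the pod cgroup when systemd manages cgroups.
			line += "\nconmon_cgroup = \"pod\""
		case strings.Contains(line, "conmon_cgroup = "):
			continue // dropped, then re-added after cgroup_manager, as the sed steps do
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	conf := "pause_image = \"old\"\nconmon_cgroup = \"system\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Println(rewriteCrioConf(conf, "registry.k8s.io/pause:3.9", "systemd"))
}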
	I1123 08:44:22.479721  314636 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:44:22.479814  314636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:44:22.483897  314636 start.go:564] Will wait 60s for crictl version
	I1123 08:44:22.483945  314636 ssh_runner.go:195] Run: which crictl
	I1123 08:44:22.487547  314636 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:44:22.519750  314636 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:44:22.519832  314636 ssh_runner.go:195] Run: crio --version
	I1123 08:44:22.551262  314636 ssh_runner.go:195] Run: crio --version
	I1123 08:44:22.580715  314636 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1123 08:44:22.581831  314636 cli_runner.go:164] Run: docker network inspect old-k8s-version-057894 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:44:22.599083  314636 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:44:22.603144  314636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:22.613848  314636 kubeadm.go:884] updating cluster {Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:44:22.613944  314636 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:44:22.613998  314636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:22.647518  314636 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:44:22.647542  314636 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:44:22.647616  314636 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:44:22.675816  314636 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:44:22.675840  314636 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:44:22.675848  314636 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1123 08:44:22.675954  314636 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-057894 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:44:22.676050  314636 ssh_runner.go:195] Run: crio config
	I1123 08:44:22.733251  314636 cni.go:84] Creating CNI manager for ""
	I1123 08:44:22.733275  314636 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:22.733293  314636 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:44:22.733329  314636 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-057894 NodeName:old-k8s-version-057894 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:44:22.733544  314636 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-057894"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
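The rendered kubeadm config above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by "---". A minimal sketch that splits the stream and lists each document's kind, a cheap sanity check before the file is shipped to /var/tmp/minikube/kubeadm.yaml.new (the path is taken from the scp line below; the check itself is illustrative, not part of minikube):

// kubeadmyaml_sketch.go: enumerate the kinds in a multi-document
// kubeadm YAML stream using only the standard library.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: kind=%s (%d bytes)\n", i+1, kind, len(doc))
	}
}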
	I1123 08:44:22.733619  314636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:44:22.744170  314636 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:44:22.744228  314636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:44:22.752661  314636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1123 08:44:22.768641  314636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:44:22.782019  314636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1123 08:44:22.796003  314636 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:44:22.800977  314636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:44:22.813321  314636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:22.903093  314636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:22.925936  314636 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894 for IP: 192.168.76.2
	I1123 08:44:22.925957  314636 certs.go:195] generating shared ca certs ...
	I1123 08:44:22.925976  314636 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:22.926151  314636 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:44:22.926214  314636 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:44:22.926226  314636 certs.go:257] generating profile certs ...
	I1123 08:44:22.926325  314636 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/client.key
	I1123 08:44:22.926393  314636 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/apiserver.key.249ce811
	I1123 08:44:22.926443  314636 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/proxy-client.key
	I1123 08:44:22.926574  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:44:22.926615  314636 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:44:22.926627  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:44:22.926663  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:44:22.926714  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:44:22.926747  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:44:22.926807  314636 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:44:22.927577  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:44:22.946066  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:44:22.965035  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:44:22.983167  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:44:23.004136  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:44:23.025198  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:44:23.041558  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:44:23.058566  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/old-k8s-version-057894/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:44:23.074677  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:44:23.091292  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:44:23.107997  314636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:44:23.125834  314636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:44:23.138442  314636 ssh_runner.go:195] Run: openssl version
	I1123 08:44:23.144241  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:44:23.152727  314636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:23.156543  314636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:23.156592  314636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:44:23.194469  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:44:23.202009  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:44:23.210602  314636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:44:23.214015  314636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:44:23.214065  314636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:44:23.247847  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
	I1123 08:44:23.255072  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:44:23.263009  314636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:44:23.266387  314636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:44:23.266430  314636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:44:23.300576  314636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
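Each ln -fs step above creates an OpenSSL-style <subject-hash>.0 symlink in /etc/ssl/certs, where the hash comes from `openssl x509 -hash -noout`. A minimal sketch of one such step in Go, shelling out to openssl exactly as the log does; the PEM path is copied from the log, and the sketch assumes it runs with enough privilege to write /etc/ssl/certs:

// certlink_sketch.go: compute the OpenSSL subject hash of a PEM cert
// and create the <hash>.0 symlink that the trust store expects.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}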
	I1123 08:44:23.308629  314636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:44:23.312141  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:44:23.346219  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:44:23.381481  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:44:23.417721  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:44:23.461311  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:44:23.504474  314636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
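Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours, which is what triggers regeneration on restart. The same check in pure Go with crypto/x509, assuming a PEM-encoded certificate at the logged path:

// certexpiry_sketch.go: report whether a PEM certificate's NotAfter
// falls within the given window (the -checkend 86400 check).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon) // true would force regeneration
}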
	I1123 08:44:23.560218  314636 kubeadm.go:401] StartCluster: {Name:old-k8s-version-057894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-057894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:23.560327  314636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:44:23.560395  314636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:44:23.599229  314636 cri.go:89] found id: "35f8086b1de4e31006310dbc9225c47fc7ce015e3238258161e81fc2d1c7f4bd"
	I1123 08:44:23.599258  314636 cri.go:89] found id: "62bca8b239fd282ce38b86b21b9897cfdd1cd66996c68c577fb4d9a16baca0f8"
	I1123 08:44:23.599264  314636 cri.go:89] found id: "46e574a85cdd50d2ed3dfea9bf9e72260185653dd7313da97ccc3c575be7c1e6"
	I1123 08:44:23.599270  314636 cri.go:89] found id: "5ed59b21f5fe5a105c3165b1f30786d03b6ba7fda1e27532fd0541a8a4b0df67"
	I1123 08:44:23.599284  314636 cri.go:89] found id: ""
	I1123 08:44:23.599331  314636 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 08:44:23.612757  314636 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:23Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:44:23.612946  314636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:44:23.621797  314636 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:44:23.621814  314636 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:44:23.621861  314636 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:44:23.630422  314636 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:44:23.631238  314636 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-057894" does not appear in /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:23.631790  314636 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-10964/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-057894" cluster setting kubeconfig missing "old-k8s-version-057894" context setting]
	I1123 08:44:23.632584  314636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:23.634289  314636 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:44:23.643154  314636 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 08:44:23.643181  314636 kubeadm.go:602] duration metric: took 21.360308ms to restartPrimaryControlPlane
	I1123 08:44:23.643190  314636 kubeadm.go:403] duration metric: took 82.98118ms to StartCluster
	I1123 08:44:23.643205  314636 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:23.643264  314636 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:23.644605  314636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
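The kubeconfig repair above (kubeconfig.go:47-62) notices that the "old-k8s-version-057894" cluster and context entries are missing and rewrites the file under a lock. A minimal sketch of the detection step using k8s.io/client-go/tools/clientcmd; the profile name and kubeconfig path are taken from the log, and the write step is only indicated in a comment:

// kubeconfig_sketch.go: detect a kubeconfig that is missing the
// cluster/context entries for a given profile.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21966-10964/kubeconfig")
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	name := "old-k8s-version-057894"
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	if !hasCluster || !hasContext {
		fmt.Printf("%q needs repairing: cluster=%v context=%v\n", name, hasCluster, hasContext)
		// ...add the missing entries, then persist with clientcmd.WriteToFile(*cfg, path).
	}
}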
	I1123 08:44:23.644839  314636 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:44:23.644977  314636 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:23.645117  314636 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-057894"
	I1123 08:44:23.645134  314636 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-057894"
	W1123 08:44:23.645142  314636 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:44:23.645143  314636 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:44:23.645155  314636 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-057894"
	I1123 08:44:23.645176  314636 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-057894"
	I1123 08:44:23.645188  314636 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:44:23.645252  314636 addons.go:70] Setting dashboard=true in profile "old-k8s-version-057894"
	I1123 08:44:23.645268  314636 addons.go:239] Setting addon dashboard=true in "old-k8s-version-057894"
	W1123 08:44:23.645275  314636 addons.go:248] addon dashboard should already be in state true
	I1123 08:44:23.645311  314636 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:44:23.645517  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.645713  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.645745  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.649572  314636 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:23.652170  314636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:23.673583  314636 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:23.674718  314636 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:23.674737  314636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:23.674752  314636 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:44:23.674789  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:23.675471  314636 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-057894"
	W1123 08:44:23.675491  314636 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:44:23.675516  314636 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:44:23.676047  314636 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:44:23.679811  314636 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1123 08:44:20.472384  299523 node_ready.go:57] node "no-preload-187607" has "Ready":"False" status (will retry)
	I1123 08:44:22.472469  299523 node_ready.go:49] node "no-preload-187607" is "Ready"
	I1123 08:44:22.472501  299523 node_ready.go:38] duration metric: took 13.003189401s for node "no-preload-187607" to be "Ready" ...
	I1123 08:44:22.472517  299523 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:22.472570  299523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:22.485582  299523 api_server.go:72] duration metric: took 13.302203208s to wait for apiserver process to appear ...
	I1123 08:44:22.485608  299523 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:22.485625  299523 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:44:22.490169  299523 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:44:22.491237  299523 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:22.491264  299523 api_server.go:131] duration metric: took 5.649677ms to wait for apiserver health ...
	I1123 08:44:22.491274  299523 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:22.496993  299523 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:22.497040  299523 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:22.497056  299523 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running
	I1123 08:44:22.497068  299523 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running
	I1123 08:44:22.497075  299523 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running
	I1123 08:44:22.497090  299523 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running
	I1123 08:44:22.497097  299523 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running
	I1123 08:44:22.497103  299523 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running
	I1123 08:44:22.497119  299523 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:22.497130  299523 system_pods.go:74] duration metric: took 5.849104ms to wait for pod list to return data ...
	I1123 08:44:22.497140  299523 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:22.499755  299523 default_sa.go:45] found service account: "default"
	I1123 08:44:22.499774  299523 default_sa.go:55] duration metric: took 2.624023ms for default service account to be created ...
	I1123 08:44:22.499783  299523 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:44:22.502854  299523 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:22.502878  299523 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:22.502883  299523 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running
	I1123 08:44:22.502889  299523 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running
	I1123 08:44:22.502903  299523 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running
	I1123 08:44:22.502911  299523 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running
	I1123 08:44:22.502914  299523 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running
	I1123 08:44:22.502918  299523 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running
	I1123 08:44:22.502922  299523 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:44:22.502947  299523 retry.go:31] will retry after 212.635743ms: missing components: kube-dns
	I1123 08:44:22.720827  299523 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:22.720860  299523 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running
	I1123 08:44:22.720868  299523 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running
	I1123 08:44:22.720874  299523 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running
	I1123 08:44:22.720879  299523 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running
	I1123 08:44:22.720884  299523 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running
	I1123 08:44:22.720889  299523 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running
	I1123 08:44:22.720894  299523 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running
	I1123 08:44:22.720898  299523 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running
	I1123 08:44:22.720908  299523 system_pods.go:126] duration metric: took 221.118098ms to wait for k8s-apps to be running ...
	I1123 08:44:22.720921  299523 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:44:22.720967  299523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:22.737856  299523 system_svc.go:56] duration metric: took 16.926837ms WaitForService to wait for kubelet
	I1123 08:44:22.737885  299523 kubeadm.go:587] duration metric: took 13.554508173s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:22.737907  299523 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:22.741435  299523 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:22.741466  299523 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:22.741501  299523 node_conditions.go:105] duration metric: took 3.587505ms to run NodePressure ...
	I1123 08:44:22.741521  299523 start.go:242] waiting for startup goroutines ...
	I1123 08:44:22.741530  299523 start.go:247] waiting for cluster config update ...
	I1123 08:44:22.741543  299523 start.go:256] writing updated cluster config ...
	I1123 08:44:22.741835  299523 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:22.746467  299523 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:22.750370  299523 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.755106  299523 pod_ready.go:94] pod "coredns-66bc5c9577-khlrk" is "Ready"
	I1123 08:44:22.755127  299523 pod_ready.go:86] duration metric: took 4.736609ms for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.757334  299523 pod_ready.go:83] waiting for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.761272  299523 pod_ready.go:94] pod "etcd-no-preload-187607" is "Ready"
	I1123 08:44:22.761300  299523 pod_ready.go:86] duration metric: took 3.934649ms for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.763291  299523 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.767155  299523 pod_ready.go:94] pod "kube-apiserver-no-preload-187607" is "Ready"
	I1123 08:44:22.767175  299523 pod_ready.go:86] duration metric: took 3.862325ms for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:22.769311  299523 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.150645  299523 pod_ready.go:94] pod "kube-controller-manager-no-preload-187607" is "Ready"
	I1123 08:44:23.150674  299523 pod_ready.go:86] duration metric: took 381.341589ms for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.350884  299523 pod_ready.go:83] waiting for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.751044  299523 pod_ready.go:94] pod "kube-proxy-f9d8j" is "Ready"
	I1123 08:44:23.751078  299523 pod_ready.go:86] duration metric: took 400.167313ms for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:23.952910  299523 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:24.350764  299523 pod_ready.go:94] pod "kube-scheduler-no-preload-187607" is "Ready"
	I1123 08:44:24.350789  299523 pod_ready.go:86] duration metric: took 397.819843ms for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:44:24.350803  299523 pod_ready.go:40] duration metric: took 1.604299274s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:24.397775  299523 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:24.399158  299523 out.go:179] * Done! kubectl is now configured to use "no-preload-187607" cluster and "default" namespace by default
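The extra wait above (pod_ready.go) polls each labelled control-plane pod until its Ready condition reports True, or until the pod is gone entirely. A minimal client-go sketch of that per-pod check; the helper name is hypothetical, not minikube's, and a configured clientset is assumed:

	// podReadyOrGone mirrors the pod_ready.go wait in the log: a pod
	// satisfies the wait when its Ready condition is True, or when it
	// no longer exists. Illustrative sketch only.
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func podReadyOrGone(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // the pod being gone also ends the wait
		}
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}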
	I1123 08:44:20.262746  310933 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:44:20.266869  310933 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:44:20.266886  310933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:44:20.280566  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:44:20.496233  310933 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:44:20.496351  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-653361 minikube.k8s.io/updated_at=2025_11_23T08_44_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=newest-cni-653361 minikube.k8s.io/primary=true
	I1123 08:44:20.496443  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:20.507988  310933 ops.go:34] apiserver oom_adj: -16
	I1123 08:44:20.606165  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:21.106489  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:21.606483  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:22.106819  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:22.606297  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:23.106344  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:23.606998  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:24.106482  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:24.606886  310933 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:44:24.680990  310933 kubeadm.go:1114] duration metric: took 4.184613866s to wait for elevateKubeSystemPrivileges
	I1123 08:44:24.681030  310933 kubeadm.go:403] duration metric: took 14.513667228s to StartCluster
	I1123 08:44:24.681047  310933 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:24.681116  310933 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:24.682504  310933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:44:24.682726  310933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:44:24.682742  310933 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:44:24.682798  310933 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:44:24.682915  310933 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-653361"
	I1123 08:44:24.682939  310933 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-653361"
	I1123 08:44:24.682965  310933 config.go:182] Loaded profile config "newest-cni-653361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:44:24.682957  310933 addons.go:70] Setting default-storageclass=true in profile "newest-cni-653361"
	I1123 08:44:24.683026  310933 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-653361"
	I1123 08:44:24.682973  310933 host.go:66] Checking if "newest-cni-653361" exists ...
	I1123 08:44:24.683360  310933 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:24.683566  310933 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:24.684903  310933 out.go:179] * Verifying Kubernetes components...
	I1123 08:44:24.686286  310933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:24.707731  310933 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:44:24.708852  310933 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:24.708871  310933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:44:24.708940  310933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653361
	I1123 08:44:24.709498  310933 addons.go:239] Setting addon default-storageclass=true in "newest-cni-653361"
	I1123 08:44:24.709538  310933 host.go:66] Checking if "newest-cni-653361" exists ...
	I1123 08:44:24.710030  310933 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:24.736352  310933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/newest-cni-653361/id_rsa Username:docker}
	I1123 08:44:24.738589  310933 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:24.738609  310933 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:24.738666  310933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653361
	I1123 08:44:24.763567  310933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/newest-cni-653361/id_rsa Username:docker}
	I1123 08:44:24.781923  310933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:44:24.841189  310933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:24.875213  310933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:24.896839  310933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:24.990558  310933 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
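The sed pipeline at 08:44:24.781923 splices a hosts stanza into the CoreDNS Corefile ahead of the forward directive, which is what this "host record injected" line confirms; the injected block amounts to:

	hosts {
	   192.168.103.1 host.minikube.internal
	   fallthrough
	}

The fallthrough directive lets every other name continue to the regular plugins, so only host.minikube.internal is answered from the stanza.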
	I1123 08:44:24.991988  310933 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:24.992052  310933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:25.206427  310933 api_server.go:72] duration metric: took 523.65435ms to wait for apiserver process to appear ...
	I1123 08:44:25.206454  310933 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:25.206475  310933 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:44:25.213238  310933 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:44:25.214239  310933 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:25.214267  310933 api_server.go:131] duration metric: took 7.804462ms to wait for apiserver health ...
	I1123 08:44:25.214277  310933 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:25.214620  310933 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 08:44:25.216658  310933 addons.go:530] duration metric: took 533.865585ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:44:25.217317  310933 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:25.217348  310933 system_pods.go:61] "coredns-66bc5c9577-7bttc" [db2ce82f-dd5e-452f-9b7c-4f814d6d4824] Pending
	I1123 08:44:25.217359  310933 system_pods.go:61] "etcd-newest-cni-653361" [c88c51f3-384a-4e42-a5b5-eb56b4063ca0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:44:25.217368  310933 system_pods.go:61] "kindnet-sv4xk" [bf003336-6803-41a9-aaea-9aba51c062be] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:44:25.217382  310933 system_pods.go:61] "kube-apiserver-newest-cni-653361" [555ae394-11ee-4c38-9844-0eb84e52169e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:44:25.217392  310933 system_pods.go:61] "kube-controller-manager-newest-cni-653361" [65cfedeb-a3c7-4a0c-a38f-30b249ee0c5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:44:25.217401  310933 system_pods.go:61] "kube-proxy-hwjc5" [4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:44:25.217408  310933 system_pods.go:61] "kube-scheduler-newest-cni-653361" [158da57a-3f1c-4de3-94b2-d90400674ba2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:44:25.217417  310933 system_pods.go:61] "storage-provisioner" [3d48cd45-8d74-48f3-8cab-01e61921311b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:44:25.217425  310933 system_pods.go:74] duration metric: took 3.141242ms to wait for pod list to return data ...
	I1123 08:44:25.217434  310933 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:25.219598  310933 default_sa.go:45] found service account: "default"
	I1123 08:44:25.219617  310933 default_sa.go:55] duration metric: took 2.17718ms for default service account to be created ...
	I1123 08:44:25.219630  310933 kubeadm.go:587] duration metric: took 536.861993ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:44:25.219652  310933 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:25.222457  310933 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:25.222483  310933 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:25.222500  310933 node_conditions.go:105] duration metric: took 2.842318ms to run NodePressure ...
	I1123 08:44:25.222513  310933 start.go:242] waiting for startup goroutines ...
	I1123 08:44:25.495596  310933 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-653361" context rescaled to 1 replicas
	I1123 08:44:25.495650  310933 start.go:247] waiting for cluster config update ...
	I1123 08:44:25.495666  310933 start.go:256] writing updated cluster config ...
	I1123 08:44:25.495988  310933 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:25.550187  310933 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:25.551644  310933 out.go:179] * Done! kubectl is now configured to use "newest-cni-653361" cluster and "default" namespace by default
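The nine `kubectl get sa default` runs above are one retry loop: minikube polls roughly every 500ms until the default service account exists, which is the signal that elevateKubeSystemPrivileges can finish. A rough Go equivalent, with a hypothetical helper name (the real loop lives in minikube's kubeadm code) and the paths taken from the log:

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds,
	// matching the ~500ms cadence visible in the timestamps above.
	package main

	import (
		"context"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(ctx context.Context) error {
		const kubectl = "/var/lib/minikube/binaries/v1.34.1/kubectl"
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			cmd := exec.CommandContext(ctx, "sudo", kubectl,
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				return nil // default service account exists
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}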
	I1123 08:44:23.681150  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:44:23.681176  314636 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:44:23.681240  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:23.709889  314636 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:23.709913  314636 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:44:23.709973  314636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:44:23.713967  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:23.717214  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:23.743544  314636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:44:23.815302  314636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:44:23.828243  314636 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-057894" to be "Ready" ...
	I1123 08:44:23.839717  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:44:23.839738  314636 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:44:23.844025  314636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:44:23.855392  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:44:23.855415  314636 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:44:23.871166  314636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:44:23.871577  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:44:23.871592  314636 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:44:23.887496  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:44:23.887520  314636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:44:23.905677  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:44:23.905739  314636 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:44:23.932066  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:44:23.932089  314636 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:44:23.975917  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:44:23.975942  314636 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:44:23.992525  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:44:23.992545  314636 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:44:24.006432  314636 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:44:24.006455  314636 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:44:24.021494  314636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:44:25.930334  314636 node_ready.go:49] node "old-k8s-version-057894" is "Ready"
	I1123 08:44:25.930364  314636 node_ready.go:38] duration metric: took 2.102095132s for node "old-k8s-version-057894" to be "Ready" ...
	I1123 08:44:25.930379  314636 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:44:25.930433  314636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:44:26.809195  314636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.965139724s)
	I1123 08:44:26.809274  314636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.938087167s)
	I1123 08:44:27.189654  314636 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.168118033s)
	I1123 08:44:27.189746  314636 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.259294218s)
	I1123 08:44:27.189783  314636 api_server.go:72] duration metric: took 3.544916472s to wait for apiserver process to appear ...
	I1123 08:44:27.189794  314636 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:44:27.189818  314636 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 08:44:27.190828  314636 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-057894 addons enable metrics-server
	
	I1123 08:44:27.192214  314636 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1123 08:44:27.193501  314636 addons.go:530] duration metric: took 3.548528886s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1123 08:44:27.196116  314636 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 08:44:27.197327  314636 api_server.go:141] control plane version: v1.28.0
	I1123 08:44:27.197349  314636 api_server.go:131] duration metric: took 7.548026ms to wait for apiserver health ...
	I1123 08:44:27.197355  314636 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:27.201254  314636 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:27.201295  314636 system_pods.go:61] "coredns-5dd5756b68-t8zg8" [f09dcee9-59c4-42e4-b347-ad3edcaf7e99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:27.201309  314636 system_pods.go:61] "etcd-old-k8s-version-057894" [6d9f6e4a-1fda-454c-af4a-a063eaec8ff4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:44:27.201320  314636 system_pods.go:61] "kindnet-lwhjw" [23c26128-6a1c-49ce-9584-c744e1c0020f] Running
	I1123 08:44:27.201331  314636 system_pods.go:61] "kube-apiserver-old-k8s-version-057894" [01709ee1-0b4b-417e-aa41-233c3eb6c516] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:44:27.201342  314636 system_pods.go:61] "kube-controller-manager-old-k8s-version-057894" [8acaebc2-556f-4af4-b611-ea475349197c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:44:27.201354  314636 system_pods.go:61] "kube-proxy-6t2mg" [d718da2c-03e9-429b-ae93-fb6053fa65b9] Running
	I1123 08:44:27.201368  314636 system_pods.go:61] "kube-scheduler-old-k8s-version-057894" [580e3abd-6da9-4046-a64f-848ac8a47bc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:44:27.201376  314636 system_pods.go:61] "storage-provisioner" [8c02ffc7-dd73-4e75-b9c4-b386f8709f29] Running
	I1123 08:44:27.201383  314636 system_pods.go:74] duration metric: took 4.022105ms to wait for pod list to return data ...
	I1123 08:44:27.201394  314636 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:27.203274  314636 default_sa.go:45] found service account: "default"
	I1123 08:44:27.203294  314636 default_sa.go:55] duration metric: took 1.893015ms for default service account to be created ...
	I1123 08:44:27.203303  314636 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:44:27.206181  314636 system_pods.go:86] 8 kube-system pods found
	I1123 08:44:27.206210  314636 system_pods.go:89] "coredns-5dd5756b68-t8zg8" [f09dcee9-59c4-42e4-b347-ad3edcaf7e99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:44:27.206226  314636 system_pods.go:89] "etcd-old-k8s-version-057894" [6d9f6e4a-1fda-454c-af4a-a063eaec8ff4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:44:27.206236  314636 system_pods.go:89] "kindnet-lwhjw" [23c26128-6a1c-49ce-9584-c744e1c0020f] Running
	I1123 08:44:27.206245  314636 system_pods.go:89] "kube-apiserver-old-k8s-version-057894" [01709ee1-0b4b-417e-aa41-233c3eb6c516] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:44:27.206256  314636 system_pods.go:89] "kube-controller-manager-old-k8s-version-057894" [8acaebc2-556f-4af4-b611-ea475349197c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:44:27.206264  314636 system_pods.go:89] "kube-proxy-6t2mg" [d718da2c-03e9-429b-ae93-fb6053fa65b9] Running
	I1123 08:44:27.206272  314636 system_pods.go:89] "kube-scheduler-old-k8s-version-057894" [580e3abd-6da9-4046-a64f-848ac8a47bc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:44:27.206280  314636 system_pods.go:89] "storage-provisioner" [8c02ffc7-dd73-4e75-b9c4-b386f8709f29] Running
	I1123 08:44:27.206290  314636 system_pods.go:126] duration metric: took 2.980594ms to wait for k8s-apps to be running ...
	I1123 08:44:27.206301  314636 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:44:27.206346  314636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:27.219484  314636 system_svc.go:56] duration metric: took 13.178055ms WaitForService to wait for kubelet
	I1123 08:44:27.219503  314636 kubeadm.go:587] duration metric: took 3.574638912s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:27.219517  314636 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:27.221991  314636 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:27.222012  314636 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:27.222025  314636 node_conditions.go:105] duration metric: took 2.50469ms to run NodePressure ...
	I1123 08:44:27.222037  314636 start.go:242] waiting for startup goroutines ...
	I1123 08:44:27.222050  314636 start.go:247] waiting for cluster config update ...
	I1123 08:44:27.222068  314636 start.go:256] writing updated cluster config ...
	I1123 08:44:27.222315  314636 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:27.226307  314636 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:44:27.230497  314636 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-t8zg8" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:44:29.236091  314636 pod_ready.go:104] pod "coredns-5dd5756b68-t8zg8" is not "Ready", error: <nil>
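All three clusters in this section gate readiness on the same probe (api_server.go:253): an HTTPS GET to <ip>:8443/healthz that must come back 200 with body "ok". A minimal sketch of that check; skipping certificate verification here is an assumption made for brevity, not necessarily what minikube does:

	// checkHealthz probes the apiserver healthz endpoint once and
	// requires a 200 response with body "ok", as in the log above.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(ctx context.Context, hostport string) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://"+hostport+"/healthz", nil)
		if err != nil {
			return err
		}
		resp, err := client.Do(req)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz not ok: %d %q", resp.StatusCode, body)
		}
		return nil
	}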
	
	
	==> CRI-O <==
	Nov 23 08:44:22 no-preload-187607 crio[773]: time="2025-11-23T08:44:22.361946971Z" level=info msg="Starting container: 65eff20736430d252333dc362492c498fd86a215816d109576cdfc05053510df" id=834cafa2-e755-428c-a3eb-a29647e6edbc name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:22 no-preload-187607 crio[773]: time="2025-11-23T08:44:22.363738639Z" level=info msg="Started container" PID=2890 containerID=65eff20736430d252333dc362492c498fd86a215816d109576cdfc05053510df description=kube-system/coredns-66bc5c9577-khlrk/coredns id=834cafa2-e755-428c-a3eb-a29647e6edbc name=/runtime.v1.RuntimeService/StartContainer sandboxID=5685df4d8f9e58ebc315cd1ea39b7e26758531542ebfd6ea9f444db5f9945459
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.866912205Z" level=info msg="Running pod sandbox: default/busybox/POD" id=da453d8b-473a-4f8f-887d-f3f724a83b0d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.867412722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.875372764Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:44c1cf5ac63140168e44daa684eab7367605fa8bf3c9325acd0cc4cc8e54f5a0 UID:7ac9322d-8d47-4118-be2a-c9e6190f248c NetNS:/var/run/netns/819383b9-ab3e-4856-8b1a-c3164e87bc7a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132e48}] Aliases:map[]}"
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.875434832Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.888601874Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:44c1cf5ac63140168e44daa684eab7367605fa8bf3c9325acd0cc4cc8e54f5a0 UID:7ac9322d-8d47-4118-be2a-c9e6190f248c NetNS:/var/run/netns/819383b9-ab3e-4856-8b1a-c3164e87bc7a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132e48}] Aliases:map[]}"
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.888806341Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.890556636Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.89168281Z" level=info msg="Ran pod sandbox 44c1cf5ac63140168e44daa684eab7367605fa8bf3c9325acd0cc4cc8e54f5a0 with infra container: default/busybox/POD" id=da453d8b-473a-4f8f-887d-f3f724a83b0d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.893326471Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2583ca53-fd7c-4094-b702-6588046674d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.893842604Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2583ca53-fd7c-4094-b702-6588046674d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.89389454Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2583ca53-fd7c-4094-b702-6588046674d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.89492183Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5be99ba4-909b-49ab-9d29-ca8a6ed281b9 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:44:24 no-preload-187607 crio[773]: time="2025-11-23T08:44:24.897765889Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.545429816Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=5be99ba4-909b-49ab-9d29-ca8a6ed281b9 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.545953098Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1a1954b9-cfa1-4b0c-bce6-1330dea823e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.547210575Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7cfbe207-a127-4649-a57c-a9e3ac0deccc name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.550518991Z" level=info msg="Creating container: default/busybox/busybox" id=4c32588c-7a19-4c51-9753-0732e4fc2df3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.550672138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.555090158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.555630925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.601935791Z" level=info msg="Created container 05f9a9d981c177e352bca6db320bbc54500cc375e9c81746e63e3a102f67bd5b: default/busybox/busybox" id=4c32588c-7a19-4c51-9753-0732e4fc2df3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.602940554Z" level=info msg="Starting container: 05f9a9d981c177e352bca6db320bbc54500cc375e9c81746e63e3a102f67bd5b" id=b73b8db0-18fd-4c27-bc3a-14c29fb7fb55 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:25 no-preload-187607 crio[773]: time="2025-11-23T08:44:25.60715337Z" level=info msg="Started container" PID=2964 containerID=05f9a9d981c177e352bca6db320bbc54500cc375e9c81746e63e3a102f67bd5b description=default/busybox/busybox id=b73b8db0-18fd-4c27-bc3a-14c29fb7fb55 name=/runtime.v1.RuntimeService/StartContainer sandboxID=44c1cf5ac63140168e44daa684eab7367605fa8bf3c9325acd0cc4cc8e54f5a0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	05f9a9d981c17       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   44c1cf5ac6314       busybox                                     default
	65eff20736430       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   5685df4d8f9e5       coredns-66bc5c9577-khlrk                    kube-system
	6a6d3176674a6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   ead57fec6fc25       storage-provisioner                         kube-system
	e3c5c64517a42       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   84433573a17d0       kindnet-67c62                               kube-system
	5bde0f2065d9b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   3dca7b1a401ba       kube-proxy-f9d8j                            kube-system
	a42cb89d5974b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   38837598e6c1e       kube-controller-manager-no-preload-187607   kube-system
	2d707dde31c50       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   e2902c3aa9ec7       kube-scheduler-no-preload-187607            kube-system
	a6787d080f0c6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   4d86318a4019b       kube-apiserver-no-preload-187607            kube-system
	0935ff89e118a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   0ebe8d6147530       etcd-no-preload-187607                      kube-system
	
	
	==> coredns [65eff20736430d252333dc362492c498fd86a215816d109576cdfc05053510df] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46650 - 5112 "HINFO IN 6182438317882422885.8755672874512328041. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.087533584s
	
	
	==> describe nodes <==
	Name:               no-preload-187607
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-187607
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-187607
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-187607
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:44:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:21 +0000   Sun, 23 Nov 2025 08:43:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:21 +0000   Sun, 23 Nov 2025 08:43:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:21 +0000   Sun, 23 Nov 2025 08:43:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:44:21 +0000   Sun, 23 Nov 2025 08:44:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-187607
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                156073dd-043d-48c6-8d6c-0e5326137d17
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-khlrk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-no-preload-187607                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-67c62                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-no-preload-187607             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-187607    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-f9d8j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-no-preload-187607             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node no-preload-187607 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node no-preload-187607 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node no-preload-187607 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node no-preload-187607 event: Registered Node no-preload-187607 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-187607 status is now: NodeReady
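The Allocated resources block above is just the column sums of the pod table: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m against 8000m allocatable (8 CPUs), about 10.6%, which the table rounds to 10%; memory requests are 70Mi + 100Mi + 50Mi = 220Mi; and the only limits come from kindnet (100m CPU, 50Mi memory) plus coredns (170Mi memory), giving the 100m / 220Mi limit totals.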
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [0935ff89e118a78f827131ecc86d5363ec9b85dc1107acdebde50c3657d2b20a] <==
	{"level":"info","ts":"2025-11-23T08:44:04.561250Z","caller":"traceutil/trace.go:172","msg":"trace[320240623] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-no-preload-187607; range_end:; response_count:1; response_revision:257; }","duration":"110.824238ms","start":"2025-11-23T08:44:04.450415Z","end":"2025-11-23T08:44:04.561240Z","steps":["trace[320240623] 'agreement among raft nodes before linearized reading'  (duration: 110.645604ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:04.561277Z","caller":"traceutil/trace.go:172","msg":"trace[1180522584] transaction","detail":"{read_only:false; number_of_response:0; response_revision:257; }","duration":"108.997537ms","start":"2025-11-23T08:44:04.452266Z","end":"2025-11-23T08:44:04.561264Z","steps":["trace[1180522584] 'process raft request'  (duration: 108.975613ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:04.561289Z","caller":"traceutil/trace.go:172","msg":"trace[152775178] transaction","detail":"{read_only:false; number_of_response:0; response_revision:257; }","duration":"109.170509ms","start":"2025-11-23T08:44:04.452104Z","end":"2025-11-23T08:44:04.561274Z","steps":["trace[152775178] 'process raft request'  (duration: 108.997083ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:04.561316Z","caller":"traceutil/trace.go:172","msg":"trace[2007714384] transaction","detail":"{read_only:false; number_of_response:0; response_revision:257; }","duration":"109.132023ms","start":"2025-11-23T08:44:04.452172Z","end":"2025-11-23T08:44:04.561304Z","steps":["trace[2007714384] 'process raft request'  (duration: 109.031375ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:04.680186Z","caller":"traceutil/trace.go:172","msg":"trace[176567845] linearizableReadLoop","detail":"{readStateIndex:267; appliedIndex:267; }","duration":"118.086968ms","start":"2025-11-23T08:44:04.562080Z","end":"2025-11-23T08:44:04.680167Z","steps":["trace[176567845] 'read index received'  (duration: 118.078777ms)","trace[176567845] 'applied index is now lower than readState.Index'  (duration: 7.036µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:44:04.681140Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.03837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-no-preload-187607\" limit:1 ","response":"range_response_count:1 size:4418"}
	{"level":"info","ts":"2025-11-23T08:44:04.681194Z","caller":"traceutil/trace.go:172","msg":"trace[1437402189] range","detail":"{range_begin:/registry/pods/kube-system/etcd-no-preload-187607; range_end:; response_count:1; response_revision:257; }","duration":"119.105713ms","start":"2025-11-23T08:44:04.562077Z","end":"2025-11-23T08:44:04.681182Z","steps":["trace[1437402189] 'agreement among raft nodes before linearized reading'  (duration: 118.16914ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:04.681233Z","caller":"traceutil/trace.go:172","msg":"trace[271539427] transaction","detail":"{read_only:false; response_revision:259; number_of_response:1; }","duration":"168.870063ms","start":"2025-11-23T08:44:04.512350Z","end":"2025-11-23T08:44:04.681220Z","steps":["trace[271539427] 'process raft request'  (duration: 168.822248ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:04.681274Z","caller":"traceutil/trace.go:172","msg":"trace[542305658] transaction","detail":"{read_only:false; response_revision:258; number_of_response:1; }","duration":"168.936193ms","start":"2025-11-23T08:44:04.512326Z","end":"2025-11-23T08:44:04.681262Z","steps":["trace[542305658] 'process raft request'  (duration: 167.884578ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:44:04.746988Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.855066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-no-preload-187607\" limit:1 ","response":"range_response_count:1 size:6290"}
	{"level":"info","ts":"2025-11-23T08:44:04.747037Z","caller":"traceutil/trace.go:172","msg":"trace[667506998] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-no-preload-187607; range_end:; response_count:1; response_revision:259; }","duration":"184.915849ms","start":"2025-11-23T08:44:04.562109Z","end":"2025-11-23T08:44:04.747025Z","steps":["trace[667506998] 'agreement among raft nodes before linearized reading'  (duration: 184.75876ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:44:04.747033Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.914341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-no-preload-187607\" limit:1 ","response":"range_response_count:1 size:3385"}
	{"level":"info","ts":"2025-11-23T08:44:04.747051Z","caller":"traceutil/trace.go:172","msg":"trace[898524952] transaction","detail":"{read_only:false; response_revision:260; number_of_response:1; }","duration":"170.9037ms","start":"2025-11-23T08:44:04.576138Z","end":"2025-11-23T08:44:04.747042Z","steps":["trace[898524952] 'process raft request'  (duration: 170.81716ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:04.747077Z","caller":"traceutil/trace.go:172","msg":"trace[1961584743] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-no-preload-187607; range_end:; response_count:1; response_revision:259; }","duration":"184.967719ms","start":"2025-11-23T08:44:04.562097Z","end":"2025-11-23T08:44:04.747065Z","steps":["trace[1961584743] 'agreement among raft nodes before linearized reading'  (duration: 184.831582ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:44:04.888892Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.429399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:44:04.888957Z","caller":"traceutil/trace.go:172","msg":"trace[1564630412] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/persistent-volume-binder; range_end:; response_count:0; response_revision:263; }","duration":"102.51165ms","start":"2025-11-23T08:44:04.786432Z","end":"2025-11-23T08:44:04.888944Z","steps":["trace[1564630412] 'agreement among raft nodes before linearized reading'  (duration: 97.008298ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:04.888964Z","caller":"traceutil/trace.go:172","msg":"trace[1390812111] transaction","detail":"{read_only:false; response_revision:264; number_of_response:1; }","duration":"110.183766ms","start":"2025-11-23T08:44:04.778768Z","end":"2025-11-23T08:44:04.888951Z","steps":["trace[1390812111] 'process raft request'  (duration: 104.711986ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:44:05.114081Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.868229ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361471429757 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" value_size:129 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:44:05.114170Z","caller":"traceutil/trace.go:172","msg":"trace[1875784488] transaction","detail":"{read_only:false; response_revision:265; number_of_response:1; }","duration":"220.119442ms","start":"2025-11-23T08:44:04.894031Z","end":"2025-11-23T08:44:05.114150Z","steps":["trace[1875784488] 'process raft request'  (duration: 119.130454ms)","trace[1875784488] 'compare'  (duration: 100.731675ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:44:05.156396Z","caller":"traceutil/trace.go:172","msg":"trace[511726151] transaction","detail":"{read_only:false; response_revision:266; number_of_response:1; }","duration":"258.903128ms","start":"2025-11-23T08:44:04.897480Z","end":"2025-11-23T08:44:05.156383Z","steps":["trace[511726151] 'process raft request'  (duration: 258.799512ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:44:05.450197Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"242.903206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/kindnet\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T08:44:05.450257Z","caller":"traceutil/trace.go:172","msg":"trace[1889956623] range","detail":"{range_begin:/registry/clusterroles/kindnet; range_end:; response_count:0; response_revision:267; }","duration":"242.981275ms","start":"2025-11-23T08:44:05.207260Z","end":"2025-11-23T08:44:05.450241Z","steps":["trace[1889956623] 'agreement among raft nodes before linearized reading'  (duration: 38.581912ms)","trace[1889956623] 'range keys from in-memory index tree'  (duration: 204.292723ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:44:05.450559Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.455405ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361471429765 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-no-preload-187607\" mod_revision:260 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-187607\" value_size:7818 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-no-preload-187607\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:44:05.450703Z","caller":"traceutil/trace.go:172","msg":"trace[556370249] transaction","detail":"{read_only:false; response_revision:269; number_of_response:1; }","duration":"282.301829ms","start":"2025-11-23T08:44:05.168379Z","end":"2025-11-23T08:44:05.450681Z","steps":["trace[556370249] 'process raft request'  (duration: 282.25122ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:44:05.450741Z","caller":"traceutil/trace.go:172","msg":"trace[1219077916] transaction","detail":"{read_only:false; response_revision:268; number_of_response:1; }","duration":"284.119305ms","start":"2025-11-23T08:44:05.166604Z","end":"2025-11-23T08:44:05.450723Z","steps":["trace[1219077916] 'process raft request'  (duration: 79.312887ms)","trace[1219077916] 'compare'  (duration: 204.362922ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:44:34 up  1:27,  0 user,  load average: 5.48, 3.71, 2.31
	Linux no-preload-187607 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e3c5c64517a42feb504e93fcbd6ebeebaed6218a0a79c1683a94140a8b55ae68] <==
	I1123 08:44:11.570785       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:11.571012       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 08:44:11.571163       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:11.571180       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:11.571204       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:11.772953       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:11.772993       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:11.773009       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:11.865898       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:12.173161       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:12.173182       1 metrics.go:72] Registering metrics
	I1123 08:44:12.173227       1 controller.go:711] "Syncing nftables rules"
	I1123 08:44:21.773410       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:44:21.773528       1 main.go:301] handling current node
	I1123 08:44:31.776720       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:44:31.776750       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a6787d080f0c6c35022e5b5b81aae855fe4ea707702aa1c36d33c4a4f26fa5b4] <==
	I1123 08:44:01.295860       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 08:44:01.296048       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 08:44:01.304908       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:44:01.304929       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:01.307888       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:01.311404       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:01.312325       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:44:02.184042       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:44:02.187959       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:44:02.187979       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:02.697893       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:02.744406       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:02.797666       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:44:02.807812       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 08:44:02.809512       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:44:02.813630       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:44:03.205294       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:44:03.700361       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:44:03.903635       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:44:04.193952       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:44:08.958015       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:44:09.211059       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:09.216420       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:44:09.259098       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 08:44:32.641025       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:57520: use of closed network connection
	
	
	==> kube-controller-manager [a42cb89d5974b25267e93ddb6ace1acfeb35f5ba9aacfc65fdae0292bf432ec8] <==
	I1123 08:44:08.185558       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:44:08.185590       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:44:08.185611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:44:08.185670       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:44:08.191929       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:44:08.191959       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:44:08.195292       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:44:08.202759       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:08.202778       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:44:08.202785       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:44:08.203741       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:44:08.203920       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:44:08.204926       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:44:08.204964       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 08:44:08.205194       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:44:08.205760       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:44:08.211286       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:44:08.211390       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:08.211502       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:44:08.216695       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:44:08.220967       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:44:08.229184       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:44:08.231467       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:44:08.241794       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:44:23.156428       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5bde0f2065d9b0fc31230cc54b244ccbcffa983ffcd6dd805e03e3b36b438968] <==
	I1123 08:44:09.690004       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:09.756221       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:09.856366       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:09.856403       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 08:44:09.856497       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:09.874169       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:09.874221       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:09.879346       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:09.879662       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:09.879706       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:09.880906       1 config.go:309] "Starting node config controller"
	I1123 08:44:09.880964       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:09.880976       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:09.880940       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:09.880985       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:09.880916       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:09.881011       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:09.880930       1 config.go:200] "Starting service config controller"
	I1123 08:44:09.881025       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:09.981830       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:44:09.981876       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:44:09.981878       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2d707dde31c50da06c129db1bcbc973a683fa9198b3e2d7ec8d532279cf0df39] <==
	E1123 08:44:01.240199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:44:01.240206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:44:01.240202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:44:01.240218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:44:01.240315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:44:01.240347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:44:01.240367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:44:01.240474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:44:01.240577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:44:01.240590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:44:01.240704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:44:02.049121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:44:02.070399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:44:02.150208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:44:02.164551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:44:02.182897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:44:02.211962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:44:02.239564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:44:02.270763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:44:02.293863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:44:02.367421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:44:02.385635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:44:02.420660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:44:02.504996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 08:44:05.337760       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:44:04 no-preload-187607 kubelet[2290]: E1123 08:44:04.748485    2290 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-187607\" already exists" pod="kube-system/kube-scheduler-no-preload-187607"
	Nov 23 08:44:04 no-preload-187607 kubelet[2290]: I1123 08:44:04.773828    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-187607" podStartSLOduration=1.773813146 podStartE2EDuration="1.773813146s" podCreationTimestamp="2025-11-23 08:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:04.773663066 +0000 UTC m=+1.417639741" watchObservedRunningTime="2025-11-23 08:44:04.773813146 +0000 UTC m=+1.417789822"
	Nov 23 08:44:05 no-preload-187607 kubelet[2290]: I1123 08:44:05.157897    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-187607" podStartSLOduration=3.157874789 podStartE2EDuration="3.157874789s" podCreationTimestamp="2025-11-23 08:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:04.890739474 +0000 UTC m=+1.534716141" watchObservedRunningTime="2025-11-23 08:44:05.157874789 +0000 UTC m=+1.801851462"
	Nov 23 08:44:05 no-preload-187607 kubelet[2290]: I1123 08:44:05.452772    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-187607" podStartSLOduration=2.452750237 podStartE2EDuration="2.452750237s" podCreationTimestamp="2025-11-23 08:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:05.158239422 +0000 UTC m=+1.802216093" watchObservedRunningTime="2025-11-23 08:44:05.452750237 +0000 UTC m=+2.096726910"
	Nov 23 08:44:08 no-preload-187607 kubelet[2290]: I1123 08:44:08.191781    2290 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:44:08 no-preload-187607 kubelet[2290]: I1123 08:44:08.192533    2290 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:44:09 no-preload-187607 kubelet[2290]: I1123 08:44:09.413963    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d59ac36-2289-4f2f-8c9f-110235f453ee-xtables-lock\") pod \"kube-proxy-f9d8j\" (UID: \"3d59ac36-2289-4f2f-8c9f-110235f453ee\") " pod="kube-system/kube-proxy-f9d8j"
	Nov 23 08:44:09 no-preload-187607 kubelet[2290]: I1123 08:44:09.414010    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d59ac36-2289-4f2f-8c9f-110235f453ee-lib-modules\") pod \"kube-proxy-f9d8j\" (UID: \"3d59ac36-2289-4f2f-8c9f-110235f453ee\") " pod="kube-system/kube-proxy-f9d8j"
	Nov 23 08:44:09 no-preload-187607 kubelet[2290]: I1123 08:44:09.414034    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/073134c6-398a-4c03-9c1e-4970b98909fb-cni-cfg\") pod \"kindnet-67c62\" (UID: \"073134c6-398a-4c03-9c1e-4970b98909fb\") " pod="kube-system/kindnet-67c62"
	Nov 23 08:44:09 no-preload-187607 kubelet[2290]: I1123 08:44:09.414058    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073134c6-398a-4c03-9c1e-4970b98909fb-lib-modules\") pod \"kindnet-67c62\" (UID: \"073134c6-398a-4c03-9c1e-4970b98909fb\") " pod="kube-system/kindnet-67c62"
	Nov 23 08:44:09 no-preload-187607 kubelet[2290]: I1123 08:44:09.414096    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gn6t\" (UniqueName: \"kubernetes.io/projected/3d59ac36-2289-4f2f-8c9f-110235f453ee-kube-api-access-6gn6t\") pod \"kube-proxy-f9d8j\" (UID: \"3d59ac36-2289-4f2f-8c9f-110235f453ee\") " pod="kube-system/kube-proxy-f9d8j"
	Nov 23 08:44:09 no-preload-187607 kubelet[2290]: I1123 08:44:09.414121    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d59ac36-2289-4f2f-8c9f-110235f453ee-kube-proxy\") pod \"kube-proxy-f9d8j\" (UID: \"3d59ac36-2289-4f2f-8c9f-110235f453ee\") " pod="kube-system/kube-proxy-f9d8j"
	Nov 23 08:44:09 no-preload-187607 kubelet[2290]: I1123 08:44:09.414140    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/073134c6-398a-4c03-9c1e-4970b98909fb-xtables-lock\") pod \"kindnet-67c62\" (UID: \"073134c6-398a-4c03-9c1e-4970b98909fb\") " pod="kube-system/kindnet-67c62"
	Nov 23 08:44:09 no-preload-187607 kubelet[2290]: I1123 08:44:09.414167    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mr2k\" (UniqueName: \"kubernetes.io/projected/073134c6-398a-4c03-9c1e-4970b98909fb-kube-api-access-7mr2k\") pod \"kindnet-67c62\" (UID: \"073134c6-398a-4c03-9c1e-4970b98909fb\") " pod="kube-system/kindnet-67c62"
	Nov 23 08:44:10 no-preload-187607 kubelet[2290]: I1123 08:44:10.494863    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f9d8j" podStartSLOduration=1.494826299 podStartE2EDuration="1.494826299s" podCreationTimestamp="2025-11-23 08:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:10.476933923 +0000 UTC m=+7.120910593" watchObservedRunningTime="2025-11-23 08:44:10.494826299 +0000 UTC m=+7.138802976"
	Nov 23 08:44:11 no-preload-187607 kubelet[2290]: I1123 08:44:11.478807    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-67c62" podStartSLOduration=0.673744346 podStartE2EDuration="2.478788913s" podCreationTimestamp="2025-11-23 08:44:09 +0000 UTC" firstStartedPulling="2025-11-23 08:44:09.600581713 +0000 UTC m=+6.244558367" lastFinishedPulling="2025-11-23 08:44:11.405626269 +0000 UTC m=+8.049602934" observedRunningTime="2025-11-23 08:44:11.478569263 +0000 UTC m=+8.122545936" watchObservedRunningTime="2025-11-23 08:44:11.478788913 +0000 UTC m=+8.122765586"
	Nov 23 08:44:21 no-preload-187607 kubelet[2290]: I1123 08:44:21.976065    2290 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:44:22 no-preload-187607 kubelet[2290]: I1123 08:44:22.102645    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a02e6fe9-9deb-4a63-b887-bd353f7c37c5-tmp\") pod \"storage-provisioner\" (UID: \"a02e6fe9-9deb-4a63-b887-bd353f7c37c5\") " pod="kube-system/storage-provisioner"
	Nov 23 08:44:22 no-preload-187607 kubelet[2290]: I1123 08:44:22.102704    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfsdv\" (UniqueName: \"kubernetes.io/projected/a02e6fe9-9deb-4a63-b887-bd353f7c37c5-kube-api-access-kfsdv\") pod \"storage-provisioner\" (UID: \"a02e6fe9-9deb-4a63-b887-bd353f7c37c5\") " pod="kube-system/storage-provisioner"
	Nov 23 08:44:22 no-preload-187607 kubelet[2290]: I1123 08:44:22.102732    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96e8ec4-1ecf-4171-b927-a3353ac88d0c-config-volume\") pod \"coredns-66bc5c9577-khlrk\" (UID: \"e96e8ec4-1ecf-4171-b927-a3353ac88d0c\") " pod="kube-system/coredns-66bc5c9577-khlrk"
	Nov 23 08:44:22 no-preload-187607 kubelet[2290]: I1123 08:44:22.102810    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46c4\" (UniqueName: \"kubernetes.io/projected/e96e8ec4-1ecf-4171-b927-a3353ac88d0c-kube-api-access-m46c4\") pod \"coredns-66bc5c9577-khlrk\" (UID: \"e96e8ec4-1ecf-4171-b927-a3353ac88d0c\") " pod="kube-system/coredns-66bc5c9577-khlrk"
	Nov 23 08:44:22 no-preload-187607 kubelet[2290]: I1123 08:44:22.514969    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.514947388 podStartE2EDuration="13.514947388s" podCreationTimestamp="2025-11-23 08:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:22.514620385 +0000 UTC m=+19.158597062" watchObservedRunningTime="2025-11-23 08:44:22.514947388 +0000 UTC m=+19.158924061"
	Nov 23 08:44:22 no-preload-187607 kubelet[2290]: I1123 08:44:22.515084    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-khlrk" podStartSLOduration=13.515073389 podStartE2EDuration="13.515073389s" podCreationTimestamp="2025-11-23 08:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:22.505957574 +0000 UTC m=+19.149934246" watchObservedRunningTime="2025-11-23 08:44:22.515073389 +0000 UTC m=+19.159050066"
	Nov 23 08:44:24 no-preload-187607 kubelet[2290]: I1123 08:44:24.719183    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7hkt\" (UniqueName: \"kubernetes.io/projected/7ac9322d-8d47-4118-be2a-c9e6190f248c-kube-api-access-l7hkt\") pod \"busybox\" (UID: \"7ac9322d-8d47-4118-be2a-c9e6190f248c\") " pod="default/busybox"
	Nov 23 08:44:32 no-preload-187607 kubelet[2290]: E1123 08:44:32.640969    2290 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41574->127.0.0.1:43029: write tcp 127.0.0.1:41574->127.0.0.1:43029: write: broken pipe
	
	
	==> storage-provisioner [6a6d3176674a660428bd082c444228a778fbaa977e21141b1e8f2a39c3e4dd76] <==
	I1123 08:44:22.363340       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:44:22.372810       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:44:22.372864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:44:22.375078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:22.380108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:44:22.380250       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:44:22.380366       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e82eb46b-b542-473b-9efe-cdbb2e96ba53", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-187607_df8bbaf4-f766-4b69-b2f7-cc5a1366a610 became leader
	I1123 08:44:22.380498       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-187607_df8bbaf4-f766-4b69-b2f7-cc5a1366a610!
	W1123 08:44:22.382293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:22.386131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:44:22.481760       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-187607_df8bbaf4-f766-4b69-b2f7-cc5a1366a610!
	W1123 08:44:24.389706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:24.393458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:26.400782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:26.406671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:28.409766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:28.413313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:30.417228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:30.421901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:32.425520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:44:32.428998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-187607 -n no-preload-187607
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-187607 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.00s)
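
The etcd excerpt in the post-mortem above repeatedly warns "apply request took too long", with apply times of roughly 100-243ms against etcd's 100ms expected-duration, and the kernel section shows a load average of 5.48. Taken together this suggests CPU/disk contention on the shared CI host rather than a cluster misconfiguration. As a minimal sketch of how such a dump can be triaged offline, assuming the etcd lines have been saved to a local file named etcd.log (a hypothetical name, not produced by the test), the following Go program extracts the slow applies:

	// slowapply.go: scan an etcd JSON log excerpt and report every
	// "apply request took too long" entry whose apply time exceeds
	// etcd's 100ms expected-duration. Illustrative only.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
		"time"
	)

	// etcdLine models just the fields we need from etcd's structured logs.
	type etcdLine struct {
		Level string `json:"level"`
		Msg   string `json:"msg"`
		Took  string `json:"took"` // e.g. "242.903206ms"
	}

	func main() {
		f, err := os.Open("etcd.log") // hypothetical file holding the excerpt
		if err != nil {
			panic(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // etcd lines can be long
		for sc.Scan() {
			var e etcdLine
			if json.Unmarshal([]byte(strings.TrimSpace(sc.Text())), &e) != nil {
				continue // skip any non-JSON lines mixed into the dump
			}
			if e.Msg != "apply request took too long" {
				continue
			}
			if took, err := time.ParseDuration(e.Took); err == nil && took > 100*time.Millisecond {
				fmt.Printf("slow apply: %s\n", took)
			}
		}
	}

Against the excerpt above this flags all three warn-level applies (100.868229ms, 242.903206ms, 204.455405ms); the latency is real but modest, consistent with a loaded host rather than a broken etcd.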

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-653361 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-653361 --alsologtostderr -v=1: exit status 80 (2.421556251s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-653361 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:44:52.116384  324671 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:44:52.116466  324671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:52.116470  324671 out.go:374] Setting ErrFile to fd 2...
	I1123 08:44:52.116473  324671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:52.116706  324671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:44:52.116920  324671 out.go:368] Setting JSON to false
	I1123 08:44:52.116939  324671 mustload.go:66] Loading cluster: newest-cni-653361
	I1123 08:44:52.117247  324671 config.go:182] Loaded profile config "newest-cni-653361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:44:52.117648  324671 cli_runner.go:164] Run: docker container inspect newest-cni-653361 --format={{.State.Status}}
	I1123 08:44:52.135177  324671 host.go:66] Checking if "newest-cni-653361" exists ...
	I1123 08:44:52.135469  324671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:52.191200  324671 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:89 SystemTime:2025-11-23 08:44:52.181665883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:52.191851  324671 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-653361 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:44:52.193536  324671 out.go:179] * Pausing node newest-cni-653361 ... 
	I1123 08:44:52.194581  324671 host.go:66] Checking if "newest-cni-653361" exists ...
	I1123 08:44:52.194917  324671 ssh_runner.go:195] Run: systemctl --version
	I1123 08:44:52.194954  324671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-653361
	I1123 08:44:52.211861  324671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/newest-cni-653361/id_rsa Username:docker}
	I1123 08:44:52.311540  324671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:52.323068  324671 pause.go:52] kubelet running: true
	I1123 08:44:52.323121  324671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:44:52.450528  324671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:44:52.450600  324671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:44:52.513134  324671 cri.go:89] found id: "f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1"
	I1123 08:44:52.513163  324671 cri.go:89] found id: "135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0"
	I1123 08:44:52.513172  324671 cri.go:89] found id: "b6dfe80ea3b16b5150f5fa470618ac178300fcb9e23a1f7330b5e2eb5323283a"
	I1123 08:44:52.513177  324671 cri.go:89] found id: "5ce6e1dfcb7c884f99931e26214cf08dbcae9379b1bc1809cea34932337b31b6"
	I1123 08:44:52.513180  324671 cri.go:89] found id: "70f7a98bf3bcabe6f3dedc01cac36ba0d16d6237cea0f65d2be7fa4010cb20fd"
	I1123 08:44:52.513187  324671 cri.go:89] found id: "b7e1f0dc24243d35d45e248318e57982b8b5348e8dd327ede27a379524ecc12c"
	I1123 08:44:52.513190  324671 cri.go:89] found id: ""
	I1123 08:44:52.513233  324671 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:44:52.524206  324671 retry.go:31] will retry after 155.41309ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:52Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:44:52.680632  324671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:52.694960  324671 pause.go:52] kubelet running: false
	I1123 08:44:52.695019  324671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:44:52.805064  324671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:44:52.805142  324671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:44:52.869019  324671 cri.go:89] found id: "f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1"
	I1123 08:44:52.869043  324671 cri.go:89] found id: "135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0"
	I1123 08:44:52.869049  324671 cri.go:89] found id: "b6dfe80ea3b16b5150f5fa470618ac178300fcb9e23a1f7330b5e2eb5323283a"
	I1123 08:44:52.869061  324671 cri.go:89] found id: "5ce6e1dfcb7c884f99931e26214cf08dbcae9379b1bc1809cea34932337b31b6"
	I1123 08:44:52.869066  324671 cri.go:89] found id: "70f7a98bf3bcabe6f3dedc01cac36ba0d16d6237cea0f65d2be7fa4010cb20fd"
	I1123 08:44:52.869070  324671 cri.go:89] found id: "b7e1f0dc24243d35d45e248318e57982b8b5348e8dd327ede27a379524ecc12c"
	I1123 08:44:52.869074  324671 cri.go:89] found id: ""
	I1123 08:44:52.869127  324671 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:44:52.880290  324671 retry.go:31] will retry after 431.818125ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:52Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:44:53.312957  324671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:53.325364  324671 pause.go:52] kubelet running: false
	I1123 08:44:53.325408  324671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:44:53.446527  324671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:44:53.446611  324671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:44:53.508760  324671 cri.go:89] found id: "f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1"
	I1123 08:44:53.508781  324671 cri.go:89] found id: "135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0"
	I1123 08:44:53.508785  324671 cri.go:89] found id: "b6dfe80ea3b16b5150f5fa470618ac178300fcb9e23a1f7330b5e2eb5323283a"
	I1123 08:44:53.508788  324671 cri.go:89] found id: "5ce6e1dfcb7c884f99931e26214cf08dbcae9379b1bc1809cea34932337b31b6"
	I1123 08:44:53.508791  324671 cri.go:89] found id: "70f7a98bf3bcabe6f3dedc01cac36ba0d16d6237cea0f65d2be7fa4010cb20fd"
	I1123 08:44:53.508794  324671 cri.go:89] found id: "b7e1f0dc24243d35d45e248318e57982b8b5348e8dd327ede27a379524ecc12c"
	I1123 08:44:53.508797  324671 cri.go:89] found id: ""
	I1123 08:44:53.508863  324671 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:44:53.520196  324671 retry.go:31] will retry after 747.980382ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:53Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:44:54.268986  324671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:44:54.281784  324671 pause.go:52] kubelet running: false
	I1123 08:44:54.281824  324671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:44:54.393011  324671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:44:54.393086  324671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:44:54.457820  324671 cri.go:89] found id: "f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1"
	I1123 08:44:54.457846  324671 cri.go:89] found id: "135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0"
	I1123 08:44:54.457852  324671 cri.go:89] found id: "b6dfe80ea3b16b5150f5fa470618ac178300fcb9e23a1f7330b5e2eb5323283a"
	I1123 08:44:54.457857  324671 cri.go:89] found id: "5ce6e1dfcb7c884f99931e26214cf08dbcae9379b1bc1809cea34932337b31b6"
	I1123 08:44:54.457872  324671 cri.go:89] found id: "70f7a98bf3bcabe6f3dedc01cac36ba0d16d6237cea0f65d2be7fa4010cb20fd"
	I1123 08:44:54.457877  324671 cri.go:89] found id: "b7e1f0dc24243d35d45e248318e57982b8b5348e8dd327ede27a379524ecc12c"
	I1123 08:44:54.457882  324671 cri.go:89] found id: ""
	I1123 08:44:54.457940  324671 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:44:54.473154  324671 out.go:203] 
	W1123 08:44:54.474560  324671 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:44:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:44:54.474580  324671 out.go:285] * 
	* 
	W1123 08:44:54.478559  324671 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:44:54.479605  324671 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-653361 --alsologtostderr -v=1 failed: exit status 80
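
The trace above shows the shape of the pause path: minikube disables the kubelet (note "kubelet running: true" flipping to "false" after the first systemctl call), lists the CRI containers in the kube-system/kubernetes-dashboard/istio-operator namespaces via crictl (the same six IDs are found on every attempt), and then asks runc for its view of running containers before freezing them. Each `sudo runc list -f json` attempt fails with `open /run/runc: no such file or directory`, and after three retries with growing backoff the command exits with GUEST_PAUSE. Since crictl does see the containers, they exist as far as the CRI is concerned; one plausible reading is that this CRI-O node keeps its OCI runtime state under a different root than the /run/runc directory being queried, which runc only creates once it has managed a container there. A standalone Go sketch of the observed retry loop (this is not minikube's code; the fixed backoff values simply mirror the retry.go lines above, and running it inside the node, e.g. via `minikube ssh`, is an assumption):

	// pauseprobe.go: rerun "sudo runc list -f json" with the same backoff
	// schedule seen in the trace, to check whether /run/runc ever appears.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Delays mirror the three retry.go entries in the trace above.
		backoffs := []time.Duration{
			155 * time.Millisecond,
			432 * time.Millisecond,
			748 * time.Millisecond,
		}
		for i, d := range backoffs {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("runc list succeeded:\n%s", out)
				return
			}
			fmt.Printf("attempt %d failed: %v\n%s", i+1, err, out)
			time.Sleep(d)
		}
		fmt.Println("giving up: /run/runc never appeared, matching the GUEST_PAUSE above")
	}
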
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-653361
helpers_test.go:243: (dbg) docker inspect newest-cni-653361:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20",
	        "Created": "2025-11-23T08:44:05.576543108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321303,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:41.135024157Z",
	            "FinishedAt": "2025-11-23T08:44:39.975994346Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/hostname",
	        "HostsPath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/hosts",
	        "LogPath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20-json.log",
	        "Name": "/newest-cni-653361",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-653361:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-653361",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20",
	                "LowerDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-653361",
	                "Source": "/var/lib/docker/volumes/newest-cni-653361/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-653361",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-653361",
	                "name.minikube.sigs.k8s.io": "newest-cni-653361",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f8ec62e78818dac2487c6c397cbd7706b201bebbf0e92acfb981f1fadf162795",
	            "SandboxKey": "/var/run/docker/netns/f8ec62e78818",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-653361": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1a370c90bc560610803aaed5e7a991a85cacb2851129df90c5009b204f306e40",
	                    "EndpointID": "91eead2459b45cbefbb9c8c6cf75fe0d864d31723f3b13271af7d92c5330b01c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "56:cc:3a:56:1f:1a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-653361",
	                        "780e326c9456"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
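Editor's note: the inspect dump above requests every container port (22/tcp, 2376/tcp, 5000/tcp, 8443/tcp, 32443/tcp) with an empty HostPort, and NetworkSettings.Ports later shows the ephemeral 331xx ports Docker actually bound. A minimal Go sketch of recovering those bindings from the same JSON; the field names follow the Docker Engine inspect schema shown above, but the program itself is illustrative and not part of the test suite:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// binding mirrors the {"HostIp": ..., "HostPort": ...} entries above.
	type binding struct {
		HostIp   string
		HostPort string
	}

	// container keeps only the inspect fields this sketch needs.
	type container struct {
		Name            string
		NetworkSettings struct {
			Ports map[string][]binding
		}
	}

	func main() {
		// docker container inspect emits a JSON array; decode it from stdin.
		var out []container
		if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, c := range out {
			for port, binds := range c.NetworkSettings.Ports {
				for _, b := range binds {
					fmt.Printf("%s %s -> %s:%s\n", c.Name, port, b.HostIp, b.HostPort)
				}
			}
		}
	}

Piping `docker container inspect newest-cni-653361` into this would print lines like `/newest-cni-653361 22/tcp -> 127.0.0.1:33116`.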
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653361 -n newest-cni-653361
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653361 -n newest-cni-653361: exit status 2 (320.550639ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
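Editor's note: `--format={{.Host}}` is a Go text/template rendered against minikube's status struct, which is how the command can print `Running` while still exiting non-zero; minikube encodes component state into the status exit code, so right after a pause a value like 2 is expected, and the harness flags it "may be ok". A tiny stand-alone illustration of the template mechanics, using a stand-in struct rather than minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	// status is a stand-in; only Host matches the field selected above,
	// the other field names are hypothetical.
	type status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		s := status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
		tmpl := template.Must(template.New("st").Parse("{{.Host}}"))
		_ = tmpl.Execute(os.Stdout, s) // prints: Running
	}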
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-653361 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-351793 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo containerd config dump                                                                                                                                                                                                  │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo crio config                                                                                                                                                                                                             │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p bridge-351793                                                                                                                                                                                                                              │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ -p old-k8s-version-057894 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-057894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p newest-cni-653361 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p no-preload-187607 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:44:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:44:51.256964  323816 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:44:51.257241  323816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:51.257254  323816 out.go:374] Setting ErrFile to fd 2...
	I1123 08:44:51.257261  323816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:51.257594  323816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:44:51.258110  323816 out.go:368] Setting JSON to false
	I1123 08:44:51.259310  323816 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5238,"bootTime":1763882253,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:44:51.259395  323816 start.go:143] virtualization: kvm guest
	I1123 08:44:51.261112  323816 out.go:179] * [no-preload-187607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:44:51.262608  323816 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:44:51.262600  323816 notify.go:221] Checking for updates...
	I1123 08:44:51.264810  323816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:44:51.266546  323816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:51.268511  323816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:44:51.269836  323816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:44:51.271053  323816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:44:51.272662  323816 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:44:51.273436  323816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:44:51.311817  323816 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:44:51.311979  323816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:51.393529  323816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 08:44:51.379493393 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:51.393629  323816 docker.go:319] overlay module found
	I1123 08:44:51.395740  323816 out.go:179] * Using the docker driver based on existing profile
	W1123 08:44:47.736642  314636 pod_ready.go:104] pod "coredns-5dd5756b68-t8zg8" is not "Ready", error: <nil>
	W1123 08:44:49.737538  314636 pod_ready.go:104] pod "coredns-5dd5756b68-t8zg8" is not "Ready", error: <nil>
	I1123 08:44:51.336948  321110 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:44:51.343944  321110 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:44:51.345650  321110 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:51.346641  321110 api_server.go:131] duration metric: took 3.010272063s to wait for apiserver health ...
	I1123 08:44:51.346704  321110 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:51.350295  321110 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:51.350333  321110 system_pods.go:61] "coredns-66bc5c9577-csqvp" [16c414c2-c00b-4553-b06f-85581d629662] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:44:51.350346  321110 system_pods.go:61] "etcd-newest-cni-653361" [c88c51f3-384a-4e42-a5b5-eb56b4063ca0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:44:51.350364  321110 system_pods.go:61] "kindnet-sv4xk" [bf003336-6803-41a9-aaea-9aba51c062be] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:44:51.350373  321110 system_pods.go:61] "kube-apiserver-newest-cni-653361" [555ae394-11ee-4c38-9844-0eb84e52169e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:44:51.350381  321110 system_pods.go:61] "kube-controller-manager-newest-cni-653361" [65cfedeb-a3c7-4a0c-a38f-30b249ee0c5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:44:51.350389  321110 system_pods.go:61] "kube-proxy-hwjc5" [4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:44:51.350397  321110 system_pods.go:61] "kube-scheduler-newest-cni-653361" [158da57a-3f1c-4de3-94b2-d90400674ba2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:44:51.350403  321110 system_pods.go:61] "storage-provisioner" [3d48cd45-8d74-48f3-8cab-01e61921311b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:44:51.350411  321110 system_pods.go:74] duration metric: took 3.696826ms to wait for pod list to return data ...
	I1123 08:44:51.350421  321110 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:51.352581  321110 default_sa.go:45] found service account: "default"
	I1123 08:44:51.352628  321110 default_sa.go:55] duration metric: took 2.200583ms for default service account to be created ...
	I1123 08:44:51.352652  321110 kubeadm.go:587] duration metric: took 3.175846129s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:44:51.352680  321110 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:51.354905  321110 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:51.354943  321110 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:51.354964  321110 node_conditions.go:105] duration metric: took 2.256439ms to run NodePressure ...
	I1123 08:44:51.354977  321110 start.go:242] waiting for startup goroutines ...
	I1123 08:44:51.354987  321110 start.go:247] waiting for cluster config update ...
	I1123 08:44:51.355000  321110 start.go:256] writing updated cluster config ...
	I1123 08:44:51.355284  321110 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:51.421972  321110 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:51.424387  321110 out.go:179] * Done! kubectl is now configured to use "newest-cni-653361" cluster and "default" namespace by default
	I1123 08:44:51.396873  323816 start.go:309] selected driver: docker
	I1123 08:44:51.396892  323816 start.go:927] validating driver "docker" against &{Name:no-preload-187607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-187607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:51.397005  323816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:44:51.397738  323816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:51.471676  323816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:44:51.461092497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:51.472081  323816 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:51.472116  323816 cni.go:84] Creating CNI manager for ""
	I1123 08:44:51.472191  323816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:51.472243  323816 start.go:353] cluster config:
	{Name:no-preload-187607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-187607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:51.474494  323816 out.go:179] * Starting "no-preload-187607" primary control-plane node in "no-preload-187607" cluster
	I1123 08:44:51.475519  323816 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:44:51.476627  323816 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:44:51.477837  323816 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:44:51.477918  323816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:44:51.477959  323816 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/config.json ...
	I1123 08:44:51.478109  323816 cache.go:107] acquiring lock: {Name:mk243db6ef967dfdc0962ecd4418258443e709e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478199  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 08:44:51.478206  323816 cache.go:107] acquiring lock: {Name:mk54f5de2002243365ff4d6f32020c6ea63cd6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478231  323816 cache.go:107] acquiring lock: {Name:mkcdec1ac1d49d8aeafd88dc92f3cb72331a1ff3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478220  323816 cache.go:107] acquiring lock: {Name:mkc97ad6397ee5d34d6ec043a25e72442fa5f78c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478214  323816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 115.129µs
	I1123 08:44:51.478248  323816 cache.go:107] acquiring lock: {Name:mkdea8d541caecb172b4ac0851b2e69805b236c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478299  323816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 08:44:51.478302  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1123 08:44:51.478284  323816 cache.go:107] acquiring lock: {Name:mk6eafd5040b0e0852dcf9c9ed8f6003a69ce1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478312  323816 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 91.299µs
	I1123 08:44:51.478321  323816 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 08:44:51.478319  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 08:44:51.478300  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 08:44:51.478324  323816 cache.go:107] acquiring lock: {Name:mkf3c34bfc9b36349c4012816ca1fe69608401f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478321  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 08:44:51.478336  323816 cache.go:107] acquiring lock: {Name:mk27c53072119ba3f4bce151d943bb45795c71f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478347  323816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 102.309µs
	I1123 08:44:51.478357  323816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 08:44:51.478359  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 08:44:51.478345  323816 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 148.928µs
	I1123 08:44:51.478368  323816 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 08:44:51.478332  323816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 120.294µs
	I1123 08:44:51.478367  323816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 169.27µs
	I1123 08:44:51.478377  323816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 08:44:51.478371  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 08:44:51.478381  323816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 08:44:51.478374  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 08:44:51.478388  323816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 69.673µs
	I1123 08:44:51.478396  323816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 08:44:51.478396  323816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 64.138µs
	I1123 08:44:51.478403  323816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 08:44:51.478409  323816 cache.go:87] Successfully saved all images to host disk.
	I1123 08:44:51.500917  323816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:44:51.500939  323816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:44:51.500958  323816 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:44:51.500988  323816 start.go:360] acquireMachinesLock for no-preload-187607: {Name:mkaf0effc66bd427ff0e08f3ea2ca920b96e200d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.501032  323816 start.go:364] duration metric: took 29.615µs to acquireMachinesLock for "no-preload-187607"
	I1123 08:44:51.501048  323816 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:44:51.501054  323816 fix.go:54] fixHost starting: 
	I1123 08:44:51.501339  323816 cli_runner.go:164] Run: docker container inspect no-preload-187607 --format={{.State.Status}}
	I1123 08:44:51.523581  323816 fix.go:112] recreateIfNeeded on no-preload-187607: state=Stopped err=<nil>
	W1123 08:44:51.523618  323816 fix.go:138] unexpected machine state, will restart: <nil>
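Editor's note: two details stand out in the restart log above: every image resolves through the on-disk cache in microseconds (the paired cache.go "exists" / "succeeded" lines), and fixHost finds the machine Stopped and restarts it rather than recreating it. A rough Go sketch of that cache fast path; the helper names and paths here are illustrative stand-ins, not minikube's actual cache.go implementation:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// cachePath mirrors the layout above: registry.k8s.io/pause:3.10.1 ->
	// <root>/registry.k8s.io/pause_3.10.1 (the tag separator is rewritten).
	func cachePath(root, image string) string {
		return filepath.Join(root, strings.ReplaceAll(image, ":", "_"))
	}

	// ensureCached skips the expensive save when the tarball already exists,
	// which is why each "cache image ... took" line above is ~100µs.
	func ensureCached(root, image string, save func(path string) error) error {
		p := cachePath(root, image)
		if _, err := os.Stat(p); err == nil {
			fmt.Printf("%s exists, skipping save\n", p)
			return nil
		}
		return save(p)
	}

	func main() {
		root := "/tmp/minikube-cache/images/amd64" // illustrative root
		_ = ensureCached(root, "registry.k8s.io/pause:3.10.1", func(p string) error {
			fmt.Println("would download and save", p)
			return nil
		})
	}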
	
	
	==> CRI-O <==
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.877609023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.880449366Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cd731d83-8665-44ac-805b-60eb9cf92819 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.88312116Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.88372446Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0f1f75ba-a3ad-4e30-bf1d-803cc8201e45 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.883980284Z" level=info msg="Ran pod sandbox 58b6d9fc17f6b911e5be2a0ea7da03cd389bd14a440ae07adc364bb2912a25ba with infra container: kube-system/kube-proxy-hwjc5/POD" id=cd731d83-8665-44ac-805b-60eb9cf92819 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.8853858Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1ce944f6-6fd7-4408-bb17-00fc21a58aa1 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.885469012Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.886574216Z" level=info msg="Ran pod sandbox b8a3413c919d7fab4922c270c1476577e7e18139bf50f6afbc70d6d6a898bc27 with infra container: kube-system/kindnet-sv4xk/POD" id=0f1f75ba-a3ad-4e30-bf1d-803cc8201e45 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.886677895Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=47c85916-e81d-4262-936a-bf909bea1ce1 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.887999865Z" level=info msg="Creating container: kube-system/kube-proxy-hwjc5/kube-proxy" id=4a35608d-cdc9-47a9-ae11-90461e3ed1c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.888132142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.889422789Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=664838f5-bc37-4744-a114-d78ba79b4937 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.891906769Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=51721b7f-7dfc-4b42-a447-c880cd428175 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.893091764Z" level=info msg="Creating container: kube-system/kindnet-sv4xk/kindnet-cni" id=a14f9428-44a9-44dd-9488-23be3acdb06f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.893154682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.893178462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.893605312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.897314433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.897898567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.925591994Z" level=info msg="Created container f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1: kube-system/kindnet-sv4xk/kindnet-cni" id=a14f9428-44a9-44dd-9488-23be3acdb06f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.926498375Z" level=info msg="Starting container: f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1" id=fc87c2b2-fed4-43df-97ea-a34eacbcf5e1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.928715889Z" level=info msg="Started container" PID=1048 containerID=f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1 description=kube-system/kindnet-sv4xk/kindnet-cni id=fc87c2b2-fed4-43df-97ea-a34eacbcf5e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8a3413c919d7fab4922c270c1476577e7e18139bf50f6afbc70d6d6a898bc27
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.932792699Z" level=info msg="Created container 135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0: kube-system/kube-proxy-hwjc5/kube-proxy" id=4a35608d-cdc9-47a9-ae11-90461e3ed1c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.93400139Z" level=info msg="Starting container: 135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0" id=53ec6366-7ece-46f6-900c-765c4b89dfcf name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.937596445Z" level=info msg="Started container" PID=1047 containerID=135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0 description=kube-system/kube-proxy-hwjc5/kube-proxy id=53ec6366-7ece-46f6-900c-765c4b89dfcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=58b6d9fc17f6b911e5be2a0ea7da03cd389bd14a440ae07adc364bb2912a25ba
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f52335a55faac       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   b8a3413c919d7       kindnet-sv4xk                               kube-system
	135fb471041da       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   58b6d9fc17f6b       kube-proxy-hwjc5                            kube-system
	b6dfe80ea3b16       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   f9bba933ea988       kube-controller-manager-newest-cni-653361   kube-system
	5ce6e1dfcb7c8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   ebee91a91a754       kube-scheduler-newest-cni-653361            kube-system
	70f7a98bf3bca       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   0193c8849204e       kube-apiserver-newest-cni-653361            kube-system
	b7e1f0dc24243       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   8b5947103e30e       etcd-newest-cni-653361                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-653361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-653361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=newest-cni-653361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-653361
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:44:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:49 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:49 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:49 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 08:44:49 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-653361
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ad84826e-a86e-489e-9a4b-5295789043d1
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-653361                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-sv4xk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-653361             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-653361    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-hwjc5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-653361             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node newest-cni-653361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node newest-cni-653361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node newest-cni-653361 status is now: NodeHasSufficientPID
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s                kubelet          Node newest-cni-653361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-653361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet          Node newest-cni-653361 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node newest-cni-653361 event: Registered Node newest-cni-653361 in Controller
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-653361 event: Registered Node newest-cni-653361 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [b7e1f0dc24243d35d45e248318e57982b8b5348e8dd327ede27a379524ecc12c] <==
	{"level":"warn","ts":"2025-11-23T08:44:49.181116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.187432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.199499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.205565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.211417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.217153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.224032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.230058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.236607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.242655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.262812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.268289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.280937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.286775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.293475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.299970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.305893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.312240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.319091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.325669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.332401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.346654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.352480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.358326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.405071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59226","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:44:55 up  1:27,  0 user,  load average: 5.44, 3.87, 2.40
	Linux newest-cni-653361 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1] <==
	I1123 08:44:51.200027       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:51.200304       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:44:51.200467       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:51.200486       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:51.200509       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:51.599622       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:51.599811       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:51.599835       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:51.599975       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:51.900752       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:51.900802       1 metrics.go:72] Registering metrics
	I1123 08:44:51.900895       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [70f7a98bf3bcabe6f3dedc01cac36ba0d16d6237cea0f65d2be7fa4010cb20fd] <==
	I1123 08:44:49.876576       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:44:49.876737       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:44:49.877915       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:44:49.883367       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 08:44:49.884393       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:44:49.887112       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:44:49.895141       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:44:49.895222       1 policy_source.go:240] refreshing policies
	I1123 08:44:49.898458       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 08:44:49.898509       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:44:49.898518       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:44:49.898525       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:44:49.898533       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:44:49.916093       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:50.148442       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:44:50.179039       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:44:50.197826       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:50.204682       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:50.212969       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:44:50.245020       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.254.191"}
	I1123 08:44:50.256784       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.188.156"}
	I1123 08:44:50.783417       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:52.989968       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:44:53.140938       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:44:53.289461       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b6dfe80ea3b16b5150f5fa470618ac178300fcb9e23a1f7330b5e2eb5323283a] <==
	I1123 08:44:52.703832       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:44:52.722020       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:52.732265       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:44:52.736766       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:44:52.736789       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:44:52.737212       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:44:52.737230       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:44:52.737207       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:44:52.737335       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:44:52.737572       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:44:52.739020       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:44:52.739043       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:44:52.739089       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:44:52.739105       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:44:52.739146       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:44:52.739134       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:44:52.739212       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-653361"
	I1123 08:44:52.739297       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:44:52.741378       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:44:52.742508       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:44:52.745015       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:44:52.746256       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:52.750397       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:44:52.753672       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:44:52.761044       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0] <==
	I1123 08:44:50.976077       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:51.037759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:51.138288       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:51.138346       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:44:51.138439       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:51.164199       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:51.164270       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:51.170025       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:51.170506       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:51.170538       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:51.172290       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:51.172372       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:51.172413       1 config.go:309] "Starting node config controller"
	I1123 08:44:51.172481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:51.172509       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:51.172437       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:51.172553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:51.172427       1 config.go:200] "Starting service config controller"
	I1123 08:44:51.172606       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:51.273909       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:44:51.273970       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:44:51.274204       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5ce6e1dfcb7c884f99931e26214cf08dbcae9379b1bc1809cea34932337b31b6] <==
	I1123 08:44:48.646154       1 serving.go:386] Generated self-signed cert in-memory
	W1123 08:44:49.801621       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:44:49.801944       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:44:49.801972       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:44:49.801983       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:44:49.833906       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:44:49.833997       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:49.836970       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:49.837078       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:49.837359       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:44:49.837470       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:44:49.937937       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.601423     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-653361\" not found" node="newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.867121     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.919799     675 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.919879     675 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.919911     675 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.920774     675 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.979312     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-653361\" already exists" pod="kube-system/kube-apiserver-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.979356     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.984939     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-653361\" already exists" pod="kube-system/kube-controller-manager-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.984977     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.991507     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-653361\" already exists" pod="kube-system/kube-scheduler-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.991541     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.996758     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-653361\" already exists" pod="kube-system/etcd-newest-cni-653361"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.566879     675 apiserver.go:52] "Watching apiserver"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.602379     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-653361"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: E1123 08:44:50.608925     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-653361\" already exists" pod="kube-system/kube-apiserver-newest-cni-653361"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.666194     675 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719310     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf003336-6803-41a9-aaea-9aba51c062be-xtables-lock\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719413     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-xtables-lock\") pod \"kube-proxy-hwjc5\" (UID: \"4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f\") " pod="kube-system/kube-proxy-hwjc5"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719555     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-lib-modules\") pod \"kube-proxy-hwjc5\" (UID: \"4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f\") " pod="kube-system/kube-proxy-hwjc5"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719610     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bf003336-6803-41a9-aaea-9aba51c062be-cni-cfg\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719636     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf003336-6803-41a9-aaea-9aba51c062be-lib-modules\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:52 newest-cni-653361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:44:52 newest-cni-653361 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:44:52 newest-cni-653361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653361 -n newest-cni-653361
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653361 -n newest-cni-653361: exit status 2 (348.723862ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-653361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-csqvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wh9r4 kubernetes-dashboard-855c9754f9-hjqnz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-653361 describe pod coredns-66bc5c9577-csqvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wh9r4 kubernetes-dashboard-855c9754f9-hjqnz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-653361 describe pod coredns-66bc5c9577-csqvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wh9r4 kubernetes-dashboard-855c9754f9-hjqnz: exit status 1 (71.969788ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-csqvp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-wh9r4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-hjqnz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-653361 describe pod coredns-66bc5c9577-csqvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wh9r4 kubernetes-dashboard-855c9754f9-hjqnz: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-653361
helpers_test.go:243: (dbg) docker inspect newest-cni-653361:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20",
	        "Created": "2025-11-23T08:44:05.576543108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321303,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:41.135024157Z",
	            "FinishedAt": "2025-11-23T08:44:39.975994346Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/hostname",
	        "HostsPath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/hosts",
	        "LogPath": "/var/lib/docker/containers/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20/780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20-json.log",
	        "Name": "/newest-cni-653361",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-653361:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-653361",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "780e326c9456d4af544772125bc6c5459e6e3337774d7fedda72a7cd09c42c20",
	                "LowerDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30914559dfaf0273329572c2b9117420f29d2e732b20c473c6b39e0295ed8d3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-653361",
	                "Source": "/var/lib/docker/volumes/newest-cni-653361/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-653361",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-653361",
	                "name.minikube.sigs.k8s.io": "newest-cni-653361",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f8ec62e78818dac2487c6c397cbd7706b201bebbf0e92acfb981f1fadf162795",
	            "SandboxKey": "/var/run/docker/netns/f8ec62e78818",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-653361": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1a370c90bc560610803aaed5e7a991a85cacb2851129df90c5009b204f306e40",
	                    "EndpointID": "91eead2459b45cbefbb9c8c6cf75fe0d864d31723f3b13271af7d92c5330b01c",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "56:cc:3a:56:1f:1a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-653361",
	                        "780e326c9456"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653361 -n newest-cni-653361
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653361 -n newest-cni-653361: exit status 2 (348.833414ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-653361 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-653361 logs -n 25: (1.088190471s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-351793 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo containerd config dump                                                                                                                                                                                                  │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ ssh     │ -p bridge-351793 sudo crio config                                                                                                                                                                                                             │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ delete  │ -p bridge-351793                                                                                                                                                                                                                              │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ -p old-k8s-version-057894 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-057894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p newest-cni-653361 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p no-preload-187607 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:44:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:44:51.256964  323816 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:44:51.257241  323816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:51.257254  323816 out.go:374] Setting ErrFile to fd 2...
	I1123 08:44:51.257261  323816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:51.257594  323816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:44:51.258110  323816 out.go:368] Setting JSON to false
	I1123 08:44:51.259310  323816 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5238,"bootTime":1763882253,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:44:51.259395  323816 start.go:143] virtualization: kvm guest
	I1123 08:44:51.261112  323816 out.go:179] * [no-preload-187607] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:44:51.262608  323816 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:44:51.262600  323816 notify.go:221] Checking for updates...
	I1123 08:44:51.264810  323816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:44:51.266546  323816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:44:51.268511  323816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:44:51.269836  323816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:44:51.271053  323816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:44:51.272662  323816 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:44:51.273436  323816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:44:51.311817  323816 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:44:51.311979  323816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:51.393529  323816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 08:44:51.379493393 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:51.393629  323816 docker.go:319] overlay module found
	I1123 08:44:51.395740  323816 out.go:179] * Using the docker driver based on existing profile
	W1123 08:44:47.736642  314636 pod_ready.go:104] pod "coredns-5dd5756b68-t8zg8" is not "Ready", error: <nil>
	W1123 08:44:49.737538  314636 pod_ready.go:104] pod "coredns-5dd5756b68-t8zg8" is not "Ready", error: <nil>
	I1123 08:44:51.336948  321110 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:44:51.343944  321110 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:44:51.345650  321110 api_server.go:141] control plane version: v1.34.1
	I1123 08:44:51.346641  321110 api_server.go:131] duration metric: took 3.010272063s to wait for apiserver health ...
	I1123 08:44:51.346704  321110 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:44:51.350295  321110 system_pods.go:59] 8 kube-system pods found
	I1123 08:44:51.350333  321110 system_pods.go:61] "coredns-66bc5c9577-csqvp" [16c414c2-c00b-4553-b06f-85581d629662] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:44:51.350346  321110 system_pods.go:61] "etcd-newest-cni-653361" [c88c51f3-384a-4e42-a5b5-eb56b4063ca0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:44:51.350364  321110 system_pods.go:61] "kindnet-sv4xk" [bf003336-6803-41a9-aaea-9aba51c062be] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:44:51.350373  321110 system_pods.go:61] "kube-apiserver-newest-cni-653361" [555ae394-11ee-4c38-9844-0eb84e52169e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:44:51.350381  321110 system_pods.go:61] "kube-controller-manager-newest-cni-653361" [65cfedeb-a3c7-4a0c-a38f-30b249ee0c5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:44:51.350389  321110 system_pods.go:61] "kube-proxy-hwjc5" [4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:44:51.350397  321110 system_pods.go:61] "kube-scheduler-newest-cni-653361" [158da57a-3f1c-4de3-94b2-d90400674ba2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:44:51.350403  321110 system_pods.go:61] "storage-provisioner" [3d48cd45-8d74-48f3-8cab-01e61921311b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 08:44:51.350411  321110 system_pods.go:74] duration metric: took 3.696826ms to wait for pod list to return data ...
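Both Pending pods above cite the same untolerated node.kubernetes.io/not-ready taint. A minimal read-only check against the live API (node and context names taken from this run) confirms it:

    kubectl --context newest-cni-653361 get node newest-cni-653361 \
      -o jsonpath='{.spec.taints[*].key}'

Once the CNI initializes and kubelet reports Ready, that taint is removed and both pods can schedule.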
	I1123 08:44:51.350421  321110 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:44:51.352581  321110 default_sa.go:45] found service account: "default"
	I1123 08:44:51.352628  321110 default_sa.go:55] duration metric: took 2.200583ms for default service account to be created ...
	I1123 08:44:51.352652  321110 kubeadm.go:587] duration metric: took 3.175846129s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 08:44:51.352680  321110 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:44:51.354905  321110 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:44:51.354943  321110 node_conditions.go:123] node cpu capacity is 8
	I1123 08:44:51.354964  321110 node_conditions.go:105] duration metric: took 2.256439ms to run NodePressure ...
	I1123 08:44:51.354977  321110 start.go:242] waiting for startup goroutines ...
	I1123 08:44:51.354987  321110 start.go:247] waiting for cluster config update ...
	I1123 08:44:51.355000  321110 start.go:256] writing updated cluster config ...
	I1123 08:44:51.355284  321110 ssh_runner.go:195] Run: rm -f paused
	I1123 08:44:51.421972  321110 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:44:51.424387  321110 out.go:179] * Done! kubectl is now configured to use "newest-cni-653361" cluster and "default" namespace by default
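With the kubeconfig written, the freshly restarted cluster can be queried directly; a quick smoke test using the context name from the line above:

    kubectl --context newest-cni-653361 get pods -A

This should return the eight kube-system pods listed earlier, with coredns and storage-provisioner still Pending until the not-ready taint clears.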
	I1123 08:44:51.396873  323816 start.go:309] selected driver: docker
	I1123 08:44:51.396892  323816 start.go:927] validating driver "docker" against &{Name:no-preload-187607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-187607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:51.397005  323816 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:44:51.397738  323816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:44:51.471676  323816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:44:51.461092497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:44:51.472081  323816 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:44:51.472116  323816 cni.go:84] Creating CNI manager for ""
	I1123 08:44:51.472191  323816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:44:51.472243  323816 start.go:353] cluster config:
	{Name:no-preload-187607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-187607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:44:51.474494  323816 out.go:179] * Starting "no-preload-187607" primary control-plane node in "no-preload-187607" cluster
	I1123 08:44:51.475519  323816 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:44:51.476627  323816 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:44:51.477837  323816 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:44:51.477918  323816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:44:51.477959  323816 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/config.json ...
	I1123 08:44:51.478109  323816 cache.go:107] acquiring lock: {Name:mk243db6ef967dfdc0962ecd4418258443e709e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478199  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 08:44:51.478206  323816 cache.go:107] acquiring lock: {Name:mk54f5de2002243365ff4d6f32020c6ea63cd6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478231  323816 cache.go:107] acquiring lock: {Name:mkcdec1ac1d49d8aeafd88dc92f3cb72331a1ff3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478220  323816 cache.go:107] acquiring lock: {Name:mkc97ad6397ee5d34d6ec043a25e72442fa5f78c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478214  323816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 115.129µs
	I1123 08:44:51.478248  323816 cache.go:107] acquiring lock: {Name:mkdea8d541caecb172b4ac0851b2e69805b236c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478299  323816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 08:44:51.478302  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1123 08:44:51.478284  323816 cache.go:107] acquiring lock: {Name:mk6eafd5040b0e0852dcf9c9ed8f6003a69ce1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478312  323816 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 91.299µs
	I1123 08:44:51.478321  323816 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 08:44:51.478319  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 08:44:51.478300  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 08:44:51.478324  323816 cache.go:107] acquiring lock: {Name:mkf3c34bfc9b36349c4012816ca1fe69608401f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478321  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 08:44:51.478336  323816 cache.go:107] acquiring lock: {Name:mk27c53072119ba3f4bce151d943bb45795c71f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.478347  323816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 102.309µs
	I1123 08:44:51.478357  323816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 08:44:51.478359  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 08:44:51.478345  323816 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 148.928µs
	I1123 08:44:51.478368  323816 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 08:44:51.478332  323816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 120.294µs
	I1123 08:44:51.478367  323816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 169.27µs
	I1123 08:44:51.478377  323816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 08:44:51.478371  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 08:44:51.478381  323816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 08:44:51.478374  323816 cache.go:115] /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 08:44:51.478388  323816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 69.673µs
	I1123 08:44:51.478396  323816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 08:44:51.478396  323816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 64.138µs
	I1123 08:44:51.478403  323816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 08:44:51.478409  323816 cache.go:87] Successfully saved all images to host disk.
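Every save above was a cache hit (each tarball already existed), which is why the whole step completes in microseconds. The cache layout mirrors the registry path; for example, listing the registry.k8s.io portion of the tree named in these lines:

    ls /home/jenkins/minikube-integration/21966-10964/.minikube/cache/images/amd64/registry.k8s.io/

should show etcd_3.6.4-0, pause_3.10.1, the four kube-*_v1.34.1 tarballs, and a coredns/ subdirectory.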
	I1123 08:44:51.500917  323816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:44:51.500939  323816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:44:51.500958  323816 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:44:51.500988  323816 start.go:360] acquireMachinesLock for no-preload-187607: {Name:mkaf0effc66bd427ff0e08f3ea2ca920b96e200d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:44:51.501032  323816 start.go:364] duration metric: took 29.615µs to acquireMachinesLock for "no-preload-187607"
	I1123 08:44:51.501048  323816 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:44:51.501054  323816 fix.go:54] fixHost starting: 
	I1123 08:44:51.501339  323816 cli_runner.go:164] Run: docker container inspect no-preload-187607 --format={{.State.Status}}
	I1123 08:44:51.523581  323816 fix.go:112] recreateIfNeeded on no-preload-187607: state=Stopped err=<nil>
	W1123 08:44:51.523618  323816 fix.go:138] unexpected machine state, will restart: <nil>
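The Stopped state comes from the plain container inspect two lines up, so the same probe works by hand (profile name from this run):

    docker container inspect no-preload-187607 --format '{{.State.Status}}'

Docker itself reports a stopped container as exited; minikube surfaces that as Stopped and, as the following lines show, recovers with a simple docker start rather than recreating the machine.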
	I1123 08:44:50.641318  323135 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-726261" ...
	I1123 08:44:50.641384  323135 cli_runner.go:164] Run: docker start default-k8s-diff-port-726261
	I1123 08:44:51.008699  323135 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-726261 --format={{.State.Status}}
	I1123 08:44:51.031887  323135 kic.go:430] container "default-k8s-diff-port-726261" state is running.
	I1123 08:44:51.032385  323135 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-726261
	I1123 08:44:51.055937  323135 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/default-k8s-diff-port-726261/config.json ...
	I1123 08:44:51.056202  323135 machine.go:94] provisionDockerMachine start ...
	I1123 08:44:51.056275  323135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:44:51.076225  323135 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:51.076524  323135 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1123 08:44:51.076542  323135 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:44:51.077055  323135 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37034->127.0.0.1:33121: read: connection reset by peer
	I1123 08:44:54.217102  323135 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-726261
	
	I1123 08:44:54.217143  323135 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-726261"
	I1123 08:44:54.217216  323135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:44:54.235022  323135 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:54.235319  323135 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1123 08:44:54.235339  323135 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-726261 && echo "default-k8s-diff-port-726261" | sudo tee /etc/hostname
	I1123 08:44:54.388672  323135 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-726261
	
	I1123 08:44:54.388778  323135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:44:54.407594  323135 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:54.407838  323135 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1123 08:44:54.407856  323135 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-726261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-726261/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-726261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:44:54.556514  323135 main.go:143] libmachine: SSH cmd err, output: <nil>: 
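The /etc/hosts fragment above is deliberately idempotent: it does nothing if the hostname is already present, rewrites the 127.0.1.1 entry if one exists, and appends otherwise. Only the append branch (tee -a) echoes anything, so the empty SSH output here means no new line was added. The decision can be previewed read-only with the same tests the script uses:

    grep -xq '.*\sdefault-k8s-diff-port-726261' /etc/hosts && echo present ||
      { grep -xq '127.0.1.1\s.*' /etc/hosts && echo would-rewrite || echo would-append; }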
	I1123 08:44:54.556543  323135 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:44:54.556564  323135 ubuntu.go:190] setting up certificates
	I1123 08:44:54.556582  323135 provision.go:84] configureAuth start
	I1123 08:44:54.556631  323135 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-726261
	I1123 08:44:54.577268  323135 provision.go:143] copyHostCerts
	I1123 08:44:54.577330  323135 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:44:54.577344  323135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:44:54.577423  323135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:44:54.577564  323135 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:44:54.577577  323135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:44:54.577616  323135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:44:54.577752  323135 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:44:54.577765  323135 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:44:54.577803  323135 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:44:54.577894  323135 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-726261 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-726261 localhost minikube]
	I1123 08:44:54.618040  323135 provision.go:177] copyRemoteCerts
	I1123 08:44:54.618085  323135 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:44:54.618117  323135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:44:54.634601  323135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:44:54.735971  323135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 08:44:54.755735  323135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:44:54.775282  323135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:44:54.794673  323135 provision.go:87] duration metric: took 238.068753ms to configureAuth
	I1123 08:44:54.794711  323135 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:44:54.794895  323135 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:44:54.795002  323135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:44:54.813417  323135 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:54.813627  323135 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1123 08:44:54.813654  323135 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:44:55.152458  323135 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:44:55.152985  323135 machine.go:97] duration metric: took 4.096261948s to provisionDockerMachine
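The sysconfig drop-in written a few lines up records the service-CIDR insecure-registry option for the crio service. It can be verified from the host once provisioning finishes (profile name from this run):

    minikube -p default-k8s-diff-port-726261 ssh "cat /etc/sysconfig/crio.minikube"

which should print back the single CRIO_MINIKUBE_OPTIONS line that tee echoed above.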
	I1123 08:44:55.153020  323135 start.go:293] postStartSetup for "default-k8s-diff-port-726261" (driver="docker")
	I1123 08:44:55.153034  323135 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:44:55.153094  323135 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:44:55.153142  323135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:44:55.173557  323135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:44:55.277847  323135 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:44:55.281346  323135 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:44:55.281368  323135 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:44:55.281377  323135 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:44:55.281420  323135 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:44:55.281492  323135 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:44:55.281572  323135 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:44:55.289382  323135 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:44:55.307666  323135 start.go:296] duration metric: took 154.633383ms for postStartSetup
	I1123 08:44:55.307759  323135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:44:55.307811  323135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:44:55.326123  323135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
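The df pipeline being run here is minikube's disk-usage probe: awk takes row 2 (the data row), column 5, which for df -h is the Use% field of the filesystem backing /var, so the command returns a single token such as 12%:

    df -h /var | awk 'NR==2{print $5}'

The companion df -BG form that appears shortly after extracts column 4 instead, i.e. the free space in whole gigabytes.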
	I1123 08:44:51.525606  323816 out.go:252] * Restarting existing docker container for "no-preload-187607" ...
	I1123 08:44:51.525726  323816 cli_runner.go:164] Run: docker start no-preload-187607
	I1123 08:44:51.817487  323816 cli_runner.go:164] Run: docker container inspect no-preload-187607 --format={{.State.Status}}
	I1123 08:44:51.838456  323816 kic.go:430] container "no-preload-187607" state is running.
	I1123 08:44:51.838825  323816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-187607
	I1123 08:44:51.862324  323816 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/no-preload-187607/config.json ...
	I1123 08:44:51.862595  323816 machine.go:94] provisionDockerMachine start ...
	I1123 08:44:51.862675  323816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:44:51.883377  323816 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:51.883605  323816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1123 08:44:51.883616  323816 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:44:51.884285  323816 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50358->127.0.0.1:33126: read: connection reset by peer
	I1123 08:44:55.036242  323816 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-187607
	
	I1123 08:44:55.036280  323816 ubuntu.go:182] provisioning hostname "no-preload-187607"
	I1123 08:44:55.036365  323816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:44:55.055952  323816 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:55.056186  323816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1123 08:44:55.056212  323816 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-187607 && echo "no-preload-187607" | sudo tee /etc/hostname
	I1123 08:44:55.216169  323816 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-187607
	
	I1123 08:44:55.216256  323816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:44:55.238153  323816 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:55.238354  323816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1123 08:44:55.238370  323816 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-187607' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-187607/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-187607' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:44:55.386010  323816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:44:55.386035  323816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:44:55.386061  323816 ubuntu.go:190] setting up certificates
	I1123 08:44:55.386075  323816 provision.go:84] configureAuth start
	I1123 08:44:55.386134  323816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-187607
	I1123 08:44:55.404506  323816 provision.go:143] copyHostCerts
	I1123 08:44:55.404566  323816 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:44:55.404585  323816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:44:55.404673  323816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:44:55.404864  323816 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:44:55.404882  323816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:44:55.404930  323816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:44:55.405037  323816 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:44:55.405049  323816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:44:55.405089  323816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:44:55.405182  323816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.no-preload-187607 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-187607]
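The server certificate generated here carries the SAN list shown at the end of the line: loopback, the node IP 192.168.94.2, and three hostnames. Stock openssl can audit it, using the path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'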
	I1123 08:44:55.530132  323816 provision.go:177] copyRemoteCerts
	I1123 08:44:55.530190  323816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:44:55.530223  323816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:44:55.549149  323816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa Username:docker}
	I1123 08:44:55.655397  323816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:44:55.674337  323816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:44:55.692902  323816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:44:55.710517  323816 provision.go:87] duration metric: took 324.428872ms to configureAuth
	I1123 08:44:55.710542  323816 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:44:55.710759  323816 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:44:55.710868  323816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:44:55.731350  323816 main.go:143] libmachine: Using SSH client type: native
	I1123 08:44:55.731683  323816 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1123 08:44:55.731735  323816 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:44:56.086192  323816 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:44:56.086220  323816 machine.go:97] duration metric: took 4.22360789s to provisionDockerMachine
	I1123 08:44:56.086233  323816 start.go:293] postStartSetup for "no-preload-187607" (driver="docker")
	I1123 08:44:56.086245  323816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:44:56.086304  323816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:44:56.086542  323816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:44:56.108443  323816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa Username:docker}
	I1123 08:44:56.212675  323816 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:44:56.216339  323816 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:44:56.216362  323816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:44:56.216371  323816 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:44:56.216423  323816 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:44:56.216518  323816 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:44:56.216634  323816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:44:56.224325  323816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:44:56.251798  323816 start.go:296] duration metric: took 165.550608ms for postStartSetup
	I1123 08:44:56.251923  323816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:44:56.251968  323816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	W1123 08:44:51.738571  314636 pod_ready.go:104] pod "coredns-5dd5756b68-t8zg8" is not "Ready", error: <nil>
	W1123 08:44:54.236421  314636 pod_ready.go:104] pod "coredns-5dd5756b68-t8zg8" is not "Ready", error: <nil>
	W1123 08:44:56.247948  314636 pod_ready.go:104] pod "coredns-5dd5756b68-t8zg8" is not "Ready", error: <nil>
	I1123 08:44:55.424931  323135 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:44:55.429348  323135 fix.go:56] duration metric: took 4.810418683s for fixHost
	I1123 08:44:55.429371  323135 start.go:83] releasing machines lock for "default-k8s-diff-port-726261", held for 4.810465068s
	I1123 08:44:55.429430  323135 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-726261
	I1123 08:44:55.448083  323135 ssh_runner.go:195] Run: cat /version.json
	I1123 08:44:55.448157  323135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:44:55.448264  323135 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:44:55.448342  323135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:44:55.466944  323135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:44:55.468230  323135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:44:55.626155  323135 ssh_runner.go:195] Run: systemctl --version
	I1123 08:44:55.632706  323135 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:44:55.669089  323135 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:44:55.673841  323135 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:44:55.673895  323135 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:44:55.681751  323135 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:44:55.681772  323135 start.go:496] detecting cgroup driver to use...
	I1123 08:44:55.681801  323135 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:44:55.681841  323135 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:44:55.697263  323135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:44:55.710427  323135 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:44:55.710475  323135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:44:55.726459  323135 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:44:55.740356  323135 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:44:55.840039  323135 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:44:55.929946  323135 docker.go:234] disabling docker service ...
	I1123 08:44:55.930008  323135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:44:55.944088  323135 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:44:55.956388  323135 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:44:56.047004  323135 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:44:56.148846  323135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:44:56.161214  323135 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:44:56.177191  323135 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:44:56.177242  323135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:56.186762  323135 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:44:56.186824  323135 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:56.195856  323135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:56.204728  323135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:56.214031  323135 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:44:56.222047  323135 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:56.232118  323135 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:56.249254  323135 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:44:56.263592  323135 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:44:56.272636  323135 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:44:56.281359  323135 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:44:56.369919  323135 ssh_runner.go:195] Run: sudo systemctl restart crio
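Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the following keys (reconstructed from the commands, not captured from the node):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The systemctl restart crio on this line is what makes the new pause image, cgroup driver, and sysctl allowlist take effect.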
	I1123 08:44:56.514111  323135 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:44:56.514171  323135 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:44:56.518409  323135 start.go:564] Will wait 60s for crictl version
	I1123 08:44:56.518461  323135 ssh_runner.go:195] Run: which crictl
	I1123 08:44:56.522249  323135 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:44:56.556822  323135 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:44:56.556907  323135 ssh_runner.go:195] Run: crio --version
	I1123 08:44:56.593764  323135 ssh_runner.go:195] Run: crio --version
	I1123 08:44:56.626859  323135 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	
	
	==> CRI-O <==
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.877609023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.880449366Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cd731d83-8665-44ac-805b-60eb9cf92819 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.88312116Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.88372446Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=0f1f75ba-a3ad-4e30-bf1d-803cc8201e45 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.883980284Z" level=info msg="Ran pod sandbox 58b6d9fc17f6b911e5be2a0ea7da03cd389bd14a440ae07adc364bb2912a25ba with infra container: kube-system/kube-proxy-hwjc5/POD" id=cd731d83-8665-44ac-805b-60eb9cf92819 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.8853858Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1ce944f6-6fd7-4408-bb17-00fc21a58aa1 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.885469012Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.886574216Z" level=info msg="Ran pod sandbox b8a3413c919d7fab4922c270c1476577e7e18139bf50f6afbc70d6d6a898bc27 with infra container: kube-system/kindnet-sv4xk/POD" id=0f1f75ba-a3ad-4e30-bf1d-803cc8201e45 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.886677895Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=47c85916-e81d-4262-936a-bf909bea1ce1 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.887999865Z" level=info msg="Creating container: kube-system/kube-proxy-hwjc5/kube-proxy" id=4a35608d-cdc9-47a9-ae11-90461e3ed1c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.888132142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.889422789Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=664838f5-bc37-4744-a114-d78ba79b4937 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.891906769Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=51721b7f-7dfc-4b42-a447-c880cd428175 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.893091764Z" level=info msg="Creating container: kube-system/kindnet-sv4xk/kindnet-cni" id=a14f9428-44a9-44dd-9488-23be3acdb06f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.893154682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.893178462Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.893605312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.897314433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.897898567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.925591994Z" level=info msg="Created container f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1: kube-system/kindnet-sv4xk/kindnet-cni" id=a14f9428-44a9-44dd-9488-23be3acdb06f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.926498375Z" level=info msg="Starting container: f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1" id=fc87c2b2-fed4-43df-97ea-a34eacbcf5e1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.928715889Z" level=info msg="Started container" PID=1048 containerID=f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1 description=kube-system/kindnet-sv4xk/kindnet-cni id=fc87c2b2-fed4-43df-97ea-a34eacbcf5e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8a3413c919d7fab4922c270c1476577e7e18139bf50f6afbc70d6d6a898bc27
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.932792699Z" level=info msg="Created container 135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0: kube-system/kube-proxy-hwjc5/kube-proxy" id=4a35608d-cdc9-47a9-ae11-90461e3ed1c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.93400139Z" level=info msg="Starting container: 135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0" id=53ec6366-7ece-46f6-900c-765c4b89dfcf name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:50 newest-cni-653361 crio[520]: time="2025-11-23T08:44:50.937596445Z" level=info msg="Started container" PID=1047 containerID=135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0 description=kube-system/kube-proxy-hwjc5/kube-proxy id=53ec6366-7ece-46f6-900c-765c4b89dfcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=58b6d9fc17f6b911e5be2a0ea7da03cd389bd14a440ae07adc364bb2912a25ba
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f52335a55faac       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   b8a3413c919d7       kindnet-sv4xk                               kube-system
	135fb471041da       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   58b6d9fc17f6b       kube-proxy-hwjc5                            kube-system
	b6dfe80ea3b16       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   f9bba933ea988       kube-controller-manager-newest-cni-653361   kube-system
	5ce6e1dfcb7c8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   ebee91a91a754       kube-scheduler-newest-cni-653361            kube-system
	70f7a98bf3bca       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   0193c8849204e       kube-apiserver-newest-cni-653361            kube-system
	b7e1f0dc24243       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   8b5947103e30e       etcd-newest-cni-653361                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-653361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-653361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=newest-cni-653361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-653361
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:44:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:49 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:49 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:49 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 08:44:49 +0000   Sun, 23 Nov 2025 08:44:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-653361
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ad84826e-a86e-489e-9a4b-5295789043d1
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-653361                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-sv4xk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-653361             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-653361    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-hwjc5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-653361             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-653361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-653361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-653361 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node newest-cni-653361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-653361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet          Node newest-cni-653361 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           33s                node-controller  Node newest-cni-653361 event: Registered Node newest-cni-653361 in Controller
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-653361 event: Registered Node newest-cni-653361 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [b7e1f0dc24243d35d45e248318e57982b8b5348e8dd327ede27a379524ecc12c] <==
	{"level":"warn","ts":"2025-11-23T08:44:49.181116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.187432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.199499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.205565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.211417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.217153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.224032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.230058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.236607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.242655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.262812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.268289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.280937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.286775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.293475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.299970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.305893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.312240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.319091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.325669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.332401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.346654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.352480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.358326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:49.405071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59226","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:44:57 up  1:27,  0 user,  load average: 5.44, 3.87, 2.40
	Linux newest-cni-653361 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f52335a55faac6c3b060b079b5b57576efd42e67a8245a620d8a603f95bda2c1] <==
	I1123 08:44:51.200027       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:51.200304       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:44:51.200467       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:51.200486       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:51.200509       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:51.599622       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:51.599811       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:51.599835       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:51.599975       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:51.900752       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:51.900802       1 metrics.go:72] Registering metrics
	I1123 08:44:51.900895       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [70f7a98bf3bcabe6f3dedc01cac36ba0d16d6237cea0f65d2be7fa4010cb20fd] <==
	I1123 08:44:49.876576       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:44:49.876737       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:44:49.877915       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:44:49.883367       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 08:44:49.884393       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:44:49.887112       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:44:49.895141       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:44:49.895222       1 policy_source.go:240] refreshing policies
	I1123 08:44:49.898458       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 08:44:49.898509       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:44:49.898518       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:44:49.898525       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:44:49.898533       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:44:49.916093       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:50.148442       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:44:50.179039       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:44:50.197826       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:50.204682       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:50.212969       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:44:50.245020       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.254.191"}
	I1123 08:44:50.256784       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.188.156"}
	I1123 08:44:50.783417       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:52.989968       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:44:53.140938       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:44:53.289461       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b6dfe80ea3b16b5150f5fa470618ac178300fcb9e23a1f7330b5e2eb5323283a] <==
	I1123 08:44:52.703832       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:44:52.722020       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:52.732265       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:44:52.736766       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:44:52.736789       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:44:52.737212       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:44:52.737230       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:44:52.737207       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:44:52.737335       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:44:52.737572       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:44:52.739020       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:44:52.739043       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:44:52.739089       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:44:52.739105       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:44:52.739146       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:44:52.739134       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:44:52.739212       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-653361"
	I1123 08:44:52.739297       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:44:52.741378       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:44:52.742508       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:44:52.745015       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:44:52.746256       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:44:52.750397       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:44:52.753672       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:44:52.761044       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [135fb471041daff6cf7f286be20c0b003a2251a814d8cbf82f5225a7515e87c0] <==
	I1123 08:44:50.976077       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:44:51.037759       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:44:51.138288       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:44:51.138346       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:44:51.138439       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:44:51.164199       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:51.164270       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:44:51.170025       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:44:51.170506       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:44:51.170538       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:51.172290       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:44:51.172372       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:44:51.172413       1 config.go:309] "Starting node config controller"
	I1123 08:44:51.172481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:44:51.172509       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:44:51.172437       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:44:51.172553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:44:51.172427       1 config.go:200] "Starting service config controller"
	I1123 08:44:51.172606       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:44:51.273909       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:44:51.273970       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:44:51.274204       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5ce6e1dfcb7c884f99931e26214cf08dbcae9379b1bc1809cea34932337b31b6] <==
	I1123 08:44:48.646154       1 serving.go:386] Generated self-signed cert in-memory
	W1123 08:44:49.801621       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:44:49.801944       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:44:49.801972       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:44:49.801983       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:44:49.833906       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:44:49.833997       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:49.836970       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:49.837078       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:49.837359       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:44:49.837470       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:44:49.937937       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.601423     675 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-653361\" not found" node="newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.867121     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.919799     675 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.919879     675 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.919911     675 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.920774     675 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.979312     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-653361\" already exists" pod="kube-system/kube-apiserver-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.979356     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.984939     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-653361\" already exists" pod="kube-system/kube-controller-manager-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.984977     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.991507     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-653361\" already exists" pod="kube-system/kube-scheduler-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: I1123 08:44:49.991541     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-653361"
	Nov 23 08:44:49 newest-cni-653361 kubelet[675]: E1123 08:44:49.996758     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-653361\" already exists" pod="kube-system/etcd-newest-cni-653361"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.566879     675 apiserver.go:52] "Watching apiserver"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.602379     675 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-653361"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: E1123 08:44:50.608925     675 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-653361\" already exists" pod="kube-system/kube-apiserver-newest-cni-653361"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.666194     675 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719310     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf003336-6803-41a9-aaea-9aba51c062be-xtables-lock\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719413     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-xtables-lock\") pod \"kube-proxy-hwjc5\" (UID: \"4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f\") " pod="kube-system/kube-proxy-hwjc5"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719555     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f-lib-modules\") pod \"kube-proxy-hwjc5\" (UID: \"4cf6eccc-cb7b-441b-ab91-4ace9d6b4c8f\") " pod="kube-system/kube-proxy-hwjc5"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719610     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bf003336-6803-41a9-aaea-9aba51c062be-cni-cfg\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:50 newest-cni-653361 kubelet[675]: I1123 08:44:50.719636     675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf003336-6803-41a9-aaea-9aba51c062be-lib-modules\") pod \"kindnet-sv4xk\" (UID: \"bf003336-6803-41a9-aaea-9aba51c062be\") " pod="kube-system/kindnet-sv4xk"
	Nov 23 08:44:52 newest-cni-653361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:44:52 newest-cni-653361 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:44:52 newest-cni-653361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653361 -n newest-cni-653361
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-653361 -n newest-cni-653361: exit status 2 (462.463046ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-653361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-csqvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wh9r4 kubernetes-dashboard-855c9754f9-hjqnz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-653361 describe pod coredns-66bc5c9577-csqvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wh9r4 kubernetes-dashboard-855c9754f9-hjqnz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-653361 describe pod coredns-66bc5c9577-csqvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wh9r4 kubernetes-dashboard-855c9754f9-hjqnz: exit status 1 (77.974282ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-csqvp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-wh9r4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-hjqnz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-653361 describe pod coredns-66bc5c9577-csqvp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-wh9r4 kubernetes-dashboard-855c9754f9-hjqnz: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-057894 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-057894 --alsologtostderr -v=1: exit status 80 (2.043280015s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-057894 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:45:11.673542  331119 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:11.673648  331119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:11.673659  331119 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:11.673665  331119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:11.673996  331119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:11.674287  331119 out.go:368] Setting JSON to false
	I1123 08:45:11.674314  331119 mustload.go:66] Loading cluster: old-k8s-version-057894
	I1123 08:45:11.674836  331119 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:45:11.675385  331119 cli_runner.go:164] Run: docker container inspect old-k8s-version-057894 --format={{.State.Status}}
	I1123 08:45:11.700097  331119 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:45:11.700468  331119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:11.793909  331119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-23 08:45:11.779410083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:11.794715  331119 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-057894 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:45:11.797186  331119 out.go:179] * Pausing node old-k8s-version-057894 ... 
	I1123 08:45:11.798248  331119 host.go:66] Checking if "old-k8s-version-057894" exists ...
	I1123 08:45:11.798574  331119 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:11.798616  331119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-057894
	I1123 08:45:11.822797  331119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/old-k8s-version-057894/id_rsa Username:docker}
	I1123 08:45:11.936472  331119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:11.955846  331119 pause.go:52] kubelet running: true
	I1123 08:45:11.955912  331119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:45:12.204242  331119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:45:12.204418  331119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:45:12.304470  331119 cri.go:89] found id: "cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17"
	I1123 08:45:12.304496  331119 cri.go:89] found id: "833de90f7dd18d80eab0ca9aa9103b5aa80cc42b8b7287f8b42b5a3b32e0adeb"
	I1123 08:45:12.304503  331119 cri.go:89] found id: "45c3f69cfbb9e95b89ecc13be97e72337469a5dde7d9dafd2d7eb683d2e480a3"
	I1123 08:45:12.304508  331119 cri.go:89] found id: "39e55cc565f8340fb7399995b588a6585102abab97cf96c43e1cd271099cb02d"
	I1123 08:45:12.304512  331119 cri.go:89] found id: "b85f36938e98155acb198f46eeda831f2f859afb475d32fe72dec1a0e6723666"
	I1123 08:45:12.304516  331119 cri.go:89] found id: "35f8086b1de4e31006310dbc9225c47fc7ce015e3238258161e81fc2d1c7f4bd"
	I1123 08:45:12.304521  331119 cri.go:89] found id: "62bca8b239fd282ce38b86b21b9897cfdd1cd66996c68c577fb4d9a16baca0f8"
	I1123 08:45:12.304524  331119 cri.go:89] found id: "46e574a85cdd50d2ed3dfea9bf9e72260185653dd7313da97ccc3c575be7c1e6"
	I1123 08:45:12.304528  331119 cri.go:89] found id: "5ed59b21f5fe5a105c3165b1f30786d03b6ba7fda1e27532fd0541a8a4b0df67"
	I1123 08:45:12.304542  331119 cri.go:89] found id: "0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378"
	I1123 08:45:12.304553  331119 cri.go:89] found id: "ca13a8069125754e0a5cb3de46fa71d0a79b3e2c2018ddcc6d8f0367b7d4e1d9"
	I1123 08:45:12.304557  331119 cri.go:89] found id: ""
	I1123 08:45:12.304613  331119 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:45:12.319514  331119 retry.go:31] will retry after 330.585243ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:12Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:45:12.650882  331119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:12.668057  331119 pause.go:52] kubelet running: false
	I1123 08:45:12.668166  331119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:45:12.890207  331119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:45:12.890283  331119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:45:12.987944  331119 cri.go:89] found id: "cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17"
	I1123 08:45:12.987969  331119 cri.go:89] found id: "833de90f7dd18d80eab0ca9aa9103b5aa80cc42b8b7287f8b42b5a3b32e0adeb"
	I1123 08:45:12.987977  331119 cri.go:89] found id: "45c3f69cfbb9e95b89ecc13be97e72337469a5dde7d9dafd2d7eb683d2e480a3"
	I1123 08:45:12.987983  331119 cri.go:89] found id: "39e55cc565f8340fb7399995b588a6585102abab97cf96c43e1cd271099cb02d"
	I1123 08:45:12.987987  331119 cri.go:89] found id: "b85f36938e98155acb198f46eeda831f2f859afb475d32fe72dec1a0e6723666"
	I1123 08:45:12.987992  331119 cri.go:89] found id: "35f8086b1de4e31006310dbc9225c47fc7ce015e3238258161e81fc2d1c7f4bd"
	I1123 08:45:12.987997  331119 cri.go:89] found id: "62bca8b239fd282ce38b86b21b9897cfdd1cd66996c68c577fb4d9a16baca0f8"
	I1123 08:45:12.988001  331119 cri.go:89] found id: "46e574a85cdd50d2ed3dfea9bf9e72260185653dd7313da97ccc3c575be7c1e6"
	I1123 08:45:12.988006  331119 cri.go:89] found id: "5ed59b21f5fe5a105c3165b1f30786d03b6ba7fda1e27532fd0541a8a4b0df67"
	I1123 08:45:12.988014  331119 cri.go:89] found id: "0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378"
	I1123 08:45:12.988019  331119 cri.go:89] found id: "ca13a8069125754e0a5cb3de46fa71d0a79b3e2c2018ddcc6d8f0367b7d4e1d9"
	I1123 08:45:12.988023  331119 cri.go:89] found id: ""
	I1123 08:45:12.988068  331119 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:45:13.004368  331119 retry.go:31] will retry after 317.160176ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:13Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:45:13.321823  331119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:13.335504  331119 pause.go:52] kubelet running: false
	I1123 08:45:13.335561  331119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:45:13.521486  331119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:45:13.521590  331119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:45:13.612616  331119 cri.go:89] found id: "cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17"
	I1123 08:45:13.612640  331119 cri.go:89] found id: "833de90f7dd18d80eab0ca9aa9103b5aa80cc42b8b7287f8b42b5a3b32e0adeb"
	I1123 08:45:13.612647  331119 cri.go:89] found id: "45c3f69cfbb9e95b89ecc13be97e72337469a5dde7d9dafd2d7eb683d2e480a3"
	I1123 08:45:13.612651  331119 cri.go:89] found id: "39e55cc565f8340fb7399995b588a6585102abab97cf96c43e1cd271099cb02d"
	I1123 08:45:13.612656  331119 cri.go:89] found id: "b85f36938e98155acb198f46eeda831f2f859afb475d32fe72dec1a0e6723666"
	I1123 08:45:13.612662  331119 cri.go:89] found id: "35f8086b1de4e31006310dbc9225c47fc7ce015e3238258161e81fc2d1c7f4bd"
	I1123 08:45:13.612666  331119 cri.go:89] found id: "62bca8b239fd282ce38b86b21b9897cfdd1cd66996c68c577fb4d9a16baca0f8"
	I1123 08:45:13.612670  331119 cri.go:89] found id: "46e574a85cdd50d2ed3dfea9bf9e72260185653dd7313da97ccc3c575be7c1e6"
	I1123 08:45:13.612674  331119 cri.go:89] found id: "5ed59b21f5fe5a105c3165b1f30786d03b6ba7fda1e27532fd0541a8a4b0df67"
	I1123 08:45:13.612681  331119 cri.go:89] found id: "0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378"
	I1123 08:45:13.612716  331119 cri.go:89] found id: "ca13a8069125754e0a5cb3de46fa71d0a79b3e2c2018ddcc6d8f0367b7d4e1d9"
	I1123 08:45:13.612726  331119 cri.go:89] found id: ""
	I1123 08:45:13.612778  331119 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:45:13.625620  331119 out.go:203] 
	W1123 08:45:13.626643  331119 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:45:13.626660  331119 out.go:285] * 
	* 
	W1123 08:45:13.630905  331119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:45:13.632037  331119 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-057894 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-057894
helpers_test.go:243: (dbg) docker inspect old-k8s-version-057894:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007",
	        "Created": "2025-11-23T08:42:58.872833839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:16.682731554Z",
	            "FinishedAt": "2025-11-23T08:44:15.70584521Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007-json.log",
	        "Name": "/old-k8s-version-057894",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-057894:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-057894",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007",
	                "LowerDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-057894",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-057894/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-057894",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-057894",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-057894",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "58ddf290da227e1a18ddd02874d933b1bdd28c78e5e2981a6be9cdb1b753310c",
	            "SandboxKey": "/var/run/docker/netns/58ddf290da22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-057894": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c80b7bca17a7fb714fa079981c2a6d3c533cb55d656f0653a2df50f0ca949782",
	                    "EndpointID": "9fa587d3e24f2bddb2d715af79b1f1b70a016111fd39f47ace80fb5ff4a32772",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ba:8c:78:99:5e:f8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-057894",
	                        "521ae9646520"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
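The full `docker inspect` dump above is useful for archiving, but triage usually needs only a few fields (State.Status, State.Paused, the mapped ports). A minimal Go sketch of such a focused query using a `--format` template; the `inspectState` helper is hypothetical and not part of helpers_test.go:

    // inspectState shells out to `docker inspect` with a Go template and
    // returns just the container's status and paused flag instead of the
    // full JSON document shown above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func inspectState(container string) (string, error) {
    	out, err := exec.Command("docker", "inspect",
    		"--format", "{{.State.Status}} paused={{.State.Paused}}", container).Output()
    	if err != nil {
    		return "", fmt.Errorf("docker inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := inspectState("old-k8s-version-057894")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(state) // for the dump above: "running paused=false"
    }

For the container above this prints `running paused=false`, consistent with the Pause failure under investigation: the node is still up but was never left in a paused state.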
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-057894 -n old-k8s-version-057894
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-057894 -n old-k8s-version-057894: exit status 2 (329.331771ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
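`minikube status` reports state on stdout and also encodes component state in its exit code, which is why the harness tags `exit status 2` as "(may be ok)": the host line still reads `Running` even though some other component check failed. A sketch of reading both channels, assuming the binary path and profile from the run above:

    // checkHost runs `minikube status --format={{.Host}}` and keeps the
    // stdout payload even when the exit code is nonzero, mirroring the
    // harness's "exit status 2 (may be ok)" handling above.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64", "status",
    		"--format={{.Host}}", "-p", "old-k8s-version-057894").Output()
    	host := strings.TrimSpace(string(out))
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		// Nonzero exit reports degraded component state, not a crash;
    		// stdout is still meaningful (here: "Running").
    		fmt.Printf("host=%q exit=%d (may be ok)\n", host, ee.ExitCode())
    		return
    	}
    	if err != nil {
    		fmt.Println("could not invoke minikube:", err)
    		return
    	}
    	fmt.Printf("host=%q\n", host)
    }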
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-057894 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-057894 logs -n 25: (1.465736812s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p bridge-351793                                                                                                                                                                                                                              │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ -p old-k8s-version-057894 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-057894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p newest-cni-653361 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p no-preload-187607 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-177890                                                                                                                                                                                                               │ disable-driver-mounts-177890 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ image   │ old-k8s-version-057894 image list --format=json                                                                                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-057894 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:45:01.745123  329090 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:01.745432  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745440  329090 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:01.745446  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745739  329090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:01.746375  329090 out.go:368] Setting JSON to false
	I1123 08:45:01.748064  329090 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5249,"bootTime":1763882253,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:45:01.748157  329090 start.go:143] virtualization: kvm guest
	I1123 08:45:01.750156  329090 out.go:179] * [embed-certs-756339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:45:01.753393  329090 notify.go:221] Checking for updates...
	I1123 08:45:01.753398  329090 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:45:01.755146  329090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:45:01.756598  329090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:01.757836  329090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:45:01.758954  329090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:45:01.760360  329090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:45:01.765276  329090 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765522  329090 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765681  329090 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:45:01.765827  329090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:45:01.800644  329090 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:45:01.801313  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.871017  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.860213573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.871190  329090 docker.go:319] overlay module found
	I1123 08:45:01.872879  329090 out.go:179] * Using the docker driver based on user configuration
	I1123 08:45:01.874146  329090 start.go:309] selected driver: docker
	I1123 08:45:01.874172  329090 start.go:927] validating driver "docker" against <nil>
	I1123 08:45:01.874185  329090 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:45:01.874731  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.950283  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.938442114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.950526  329090 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:45:01.950805  329090 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.952251  329090 out.go:179] * Using Docker driver with root privileges
	I1123 08:45:01.953421  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:01.953493  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:01.953508  329090 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:45:01.953584  329090 start.go:353] cluster config:
	{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:01.954827  329090 out.go:179] * Starting "embed-certs-756339" primary control-plane node in "embed-certs-756339" cluster
	I1123 08:45:01.955848  329090 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:45:01.957107  329090 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:45:01.958365  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:01.958393  329090 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:45:01.958408  329090 cache.go:65] Caching tarball of preloaded images
	I1123 08:45:01.958465  329090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:45:01.958507  329090 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:45:01.958523  329090 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:45:01.958635  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:01.958661  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json: {Name:mk2bf238bbe57398e8f0e67e0ff345b4c996e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:01.983475  329090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:45:01.983497  329090 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:45:01.983513  329090 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:45:01.983540  329090 start.go:360] acquireMachinesLock for embed-certs-756339: {Name:mk2607c5ea38ca6bd330e0a548b36202f67f84a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:01.983642  329090 start.go:364] duration metric: took 84.653µs to acquireMachinesLock for "embed-certs-756339"
	I1123 08:45:01.983672  329090 start.go:93] Provisioning new machine with config: &{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:01.983792  329090 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:45:01.986901  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.692445857s)
	I1123 08:45:01.987002  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.670756175s)
	I1123 08:45:01.987136  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.507320621s)
	I1123 08:45:01.987186  323816 api_server.go:72] duration metric: took 2.902108336s to wait for apiserver process to appear ...
	I1123 08:45:01.987204  323816 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:01.987282  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:01.988808  323816 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-187607 addons enable metrics-server
	
	I1123 08:45:01.992707  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:45:01.992732  323816 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
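The 500 from `/healthz` is transient: only the `rbac/bootstrap-roles` and `scheduling/bootstrap-system-priority-classes` post-start hooks are still pending, which is normal seconds after an apiserver restart, and the next probe in this log (at 08:45:02) returns 200. A minimal sketch of the same poll-until-healthy loop; the URL, timeout, and `InsecureSkipVerify` for the cluster-local certificate are illustrative assumptions, not minikube's actual implementation:

    // waitHealthz polls the apiserver health endpoint until it returns
    // HTTP 200 or the deadline passes, as api_server.go does above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver serves a cluster-local cert; skip
    			// verification for this health probe only.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			ok := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if ok {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", 30*time.Second))
    }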
	I1123 08:45:01.994529  323816 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 08:45:01.422757  323135 addons.go:530] duration metric: took 3.555416147s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:01.910007  323135 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:45:01.915784  323135 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:45:01.917062  323135 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:01.917089  323135 api_server.go:131] duration metric: took 507.92158ms to wait for apiserver health ...
	I1123 08:45:01.917100  323135 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:01.921785  323135 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:01.921998  323135 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.922039  323135 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.922068  323135 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.922079  323135 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.922087  323135 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.922095  323135 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.922107  323135 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.922115  323135 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.922124  323135 system_pods.go:74] duration metric: took 5.016936ms to wait for pod list to return data ...
	I1123 08:45:01.922189  323135 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:01.925409  323135 default_sa.go:45] found service account: "default"
	I1123 08:45:01.925452  323135 default_sa.go:55] duration metric: took 3.245595ms for default service account to be created ...
	I1123 08:45:01.925463  323135 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:01.931804  323135 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.931872  323135 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.931898  323135 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.931961  323135 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.931995  323135 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.932018  323135 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.932037  323135 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.932066  323135 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.932076  323135 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.932086  323135 system_pods.go:126] duration metric: took 6.61665ms to wait for k8s-apps to be running ...
	I1123 08:45:01.932097  323135 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:01.932143  323135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:01.947263  323135 system_svc.go:56] duration metric: took 15.160659ms WaitForService to wait for kubelet
	I1123 08:45:01.947298  323135 kubeadm.go:587] duration metric: took 4.08017724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.947325  323135 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:01.950481  323135 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:01.950509  323135 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:01.950526  323135 node_conditions.go:105] duration metric: took 3.194245ms to run NodePressure ...
	I1123 08:45:01.950541  323135 start.go:242] waiting for startup goroutines ...
	I1123 08:45:01.950555  323135 start.go:247] waiting for cluster config update ...
	I1123 08:45:01.950571  323135 start.go:256] writing updated cluster config ...
	I1123 08:45:01.950876  323135 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:01.955038  323135 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:01.958449  323135 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:03.965246  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:01.995584  323816 addons.go:530] duration metric: took 2.910424664s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:02.487321  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:02.491678  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:45:02.492738  323816 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:02.492762  323816 api_server.go:131] duration metric: took 505.498506ms to wait for apiserver health ...
	I1123 08:45:02.492770  323816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:02.496254  323816 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:02.496282  323816 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.496290  323816 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.496296  323816 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.496302  323816 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.496310  323816 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.496317  323816 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.496324  323816 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.496334  323816 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.496340  323816 system_pods.go:74] duration metric: took 3.565076ms to wait for pod list to return data ...
	I1123 08:45:02.496348  323816 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:02.498409  323816 default_sa.go:45] found service account: "default"
	I1123 08:45:02.498426  323816 default_sa.go:55] duration metric: took 2.073405ms for default service account to be created ...
	I1123 08:45:02.498434  323816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:02.500853  323816 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.500888  323816 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.500899  323816 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.500912  323816 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.500929  323816 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.500941  323816 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.500951  323816 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.500961  323816 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.500971  323816 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.500978  323816 system_pods.go:126] duration metric: took 2.538671ms to wait for k8s-apps to be running ...
	I1123 08:45:02.500991  323816 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:02.501036  323816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:02.522199  323816 system_svc.go:56] duration metric: took 21.201972ms WaitForService to wait for kubelet
	I1123 08:45:02.522225  323816 kubeadm.go:587] duration metric: took 3.437147085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:02.522246  323816 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:02.524870  323816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:02.524905  323816 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:02.524925  323816 node_conditions.go:105] duration metric: took 2.673388ms to run NodePressure ...
	I1123 08:45:02.524943  323816 start.go:242] waiting for startup goroutines ...
	I1123 08:45:02.524953  323816 start.go:247] waiting for cluster config update ...
	I1123 08:45:02.524970  323816 start.go:256] writing updated cluster config ...
	I1123 08:45:02.525241  323816 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:02.529440  323816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:02.532956  323816 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:04.545550  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
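Both restarted clusters are now in the same phase: pod_ready.go waits up to 4m0s for the control-plane pods to reach the `Ready` condition, and the coredns pods are still failing that check. The condition it polls can be read directly with kubectl's JSONPath filter; a hypothetical helper, assuming kubectl and a kubeconfig pointing at the profile's cluster:

    // podReady reports whether a pod's Ready condition is "True",
    // approximating the check pod_ready.go loops on above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func podReady(namespace, name string) (bool, error) {
    	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	ok, err := podReady("kube-system", "coredns-66bc5c9577-khlrk")
    	fmt.Println(ok, err) // false while the container is not yet ready
    }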
	I1123 08:45:01.985817  329090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:45:01.986054  329090 start.go:159] libmachine.API.Create for "embed-certs-756339" (driver="docker")
	I1123 08:45:01.986094  329090 client.go:173] LocalClient.Create starting
	I1123 08:45:01.986158  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:45:01.986202  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986228  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986299  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:45:01.986331  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986349  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986747  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:45:02.006351  329090 cli_runner.go:211] docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:45:02.006428  329090 network_create.go:284] running [docker network inspect embed-certs-756339] to gather additional debugging logs...
	I1123 08:45:02.006453  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339
	W1123 08:45:02.024029  329090 cli_runner.go:211] docker network inspect embed-certs-756339 returned with exit code 1
	I1123 08:45:02.024056  329090 network_create.go:287] error running [docker network inspect embed-certs-756339]: docker network inspect embed-certs-756339: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-756339 not found
	I1123 08:45:02.024076  329090 network_create.go:289] output of [docker network inspect embed-certs-756339]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-756339 not found
	
	** /stderr **
	I1123 08:45:02.024188  329090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:02.041589  329090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:45:02.042147  329090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:45:02.042884  329090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:45:02.043340  329090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c80b7bca17a7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:f1:41:59:09:b5} reservation:<nil>}
	I1123 08:45:02.043937  329090 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8e58961f3024 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b6:f0:e4:3c:63:d5} reservation:<nil>}
	I1123 08:45:02.044437  329090 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e4a86ee726da IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ae:37:bc:fe:9d:3a} reservation:<nil>}
	I1123 08:45:02.045221  329090 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06cd0}
	I1123 08:45:02.045242  329090 network_create.go:124] attempt to create docker network embed-certs-756339 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1123 08:45:02.045287  329090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-756339 embed-certs-756339
	I1123 08:45:02.095267  329090 network_create.go:108] docker network embed-certs-756339 192.168.103.0/24 created
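	
	A note on the subnet selection above: the scan steps the third octet by 9 (49, 58, 67, 76, 85, 94, ...) until it finds a /24 that no existing bridge network claims, then reserves it. A minimal shell sketch of the same probe, assuming only the docker CLI (the loop bound and grep check are illustrative, not minikube's actual Go implementation):
	
		# list subnets already claimed by docker networks, then walk candidates
		taken=$(docker network inspect $(docker network ls -q) \
		  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null)
		for octet in $(seq 49 9 250); do
		  subnet="192.168.${octet}.0/24"
		  grep -q "$subnet" <<<"$taken" || { echo "first free subnet: $subnet"; break; }
		done
	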
	I1123 08:45:02.095296  329090 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-756339" container
	I1123 08:45:02.095350  329090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:45:02.111533  329090 cli_runner.go:164] Run: docker volume create embed-certs-756339 --label name.minikube.sigs.k8s.io=embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:45:02.128824  329090 oci.go:103] Successfully created a docker volume embed-certs-756339
	I1123 08:45:02.128896  329090 cli_runner.go:164] Run: docker run --rm --name embed-certs-756339-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --entrypoint /usr/bin/test -v embed-certs-756339:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:45:02.559029  329090 oci.go:107] Successfully prepared a docker volume embed-certs-756339
	I1123 08:45:02.559098  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:02.559108  329090 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:45:02.559163  329090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:45:06.464312  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:08.466215  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:06.707246  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:09.040137  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:11.046122  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:07.131448  329090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.572224972s)
	I1123 08:45:07.131484  329090 kic.go:203] duration metric: took 4.572370498s to extract preloaded images to volume ...
	W1123 08:45:07.131573  329090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:45:07.131616  329090 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:45:07.131860  329090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:45:07.219659  329090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-756339 --name embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-756339 --network embed-certs-756339 --ip 192.168.103.2 --volume embed-certs-756339:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:45:07.635482  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Running}}
	I1123 08:45:07.658965  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.681327  329090 cli_runner.go:164] Run: docker exec embed-certs-756339 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:45:07.737769  329090 oci.go:144] the created container "embed-certs-756339" has a running status.
	I1123 08:45:07.737802  329090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa...
	I1123 08:45:07.895228  329090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:45:07.935222  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.958382  329090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:45:07.958405  329090 kic_runner.go:114] Args: [docker exec --privileged embed-certs-756339 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:45:08.015520  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:08.039803  329090 machine.go:94] provisionDockerMachine start ...
	I1123 08:45:08.039898  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:08.064345  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:08.064680  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:08.064723  329090 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:45:08.065347  329090 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47890->127.0.0.1:33131: read: connection reset by peer
	I1123 08:45:11.244730  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.244755  329090 ubuntu.go:182] provisioning hostname "embed-certs-756339"
	I1123 08:45:11.244812  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.273763  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.274055  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.274072  329090 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-756339 && echo "embed-certs-756339" | sudo tee /etc/hostname
	I1123 08:45:11.457570  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.457714  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.488146  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.488457  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.488485  329090 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-756339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-756339/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-756339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:45:11.660198  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
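	
	To spot-check the provisioning steps above by hand, a hypothetical probe over the same forwarded SSH port (the port 33131 and the key path are taken from this log; both change on every container create):
	
		ssh -o StrictHostKeyChecking=no -p 33131 \
		  -i /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa \
		  docker@127.0.0.1 'hostname && grep embed-certs-756339 /etc/hosts'
	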
	I1123 08:45:11.660362  329090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:45:11.660453  329090 ubuntu.go:190] setting up certificates
	I1123 08:45:11.660471  329090 provision.go:84] configureAuth start
	I1123 08:45:11.661011  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:11.684982  329090 provision.go:143] copyHostCerts
	I1123 08:45:11.685043  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:45:11.685053  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:45:11.685140  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:45:11.685249  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:45:11.685255  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:45:11.685292  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:45:11.685383  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:45:11.685391  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:45:11.685427  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:45:11.685506  329090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.embed-certs-756339 san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]
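	
	configureAuth signs a server certificate whose SANs cover every address a client might dial (loopback, the static container IP, the hostname). A rough openssl equivalent of that step, under the assumption that a plain RSA key and one-year validity are acceptable; minikube's internal signer may differ:
	
		# hypothetical sketch: issue server.pem with the SANs listed in the log above
		openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
		  -out server.csr -subj "/O=jenkins.embed-certs-756339"
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
		  -out server.pem -days 365 -extfile <(printf \
		  'subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:embed-certs-756339,DNS:localhost,DNS:minikube')
	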
	
	
	==> CRI-O <==
	Nov 23 08:44:45 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:45.134491151Z" level=info msg="Started container" PID=1758 containerID=b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper id=694058b3-3acd-4161-9bc1-2166b6c20e5b name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f676acb5cbcb073b2e7fa8ecad890abc414ca394eadc5279f65a143c4fa071a
	Nov 23 08:44:46 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:46.089901364Z" level=info msg="Removing container: 0e984beb996d7e790f8a5603b3d2ea53a5721b1c7e305fb299ac9e57bbeb64d1" id=d422007e-6a1b-485d-925c-ea31e89dfba3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:44:46 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:46.098846639Z" level=info msg="Removed container 0e984beb996d7e790f8a5603b3d2ea53a5721b1c7e305fb299ac9e57bbeb64d1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper" id=d422007e-6a1b-485d-925c-ea31e89dfba3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.120493391Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a1695869-061e-4ce6-9f90-2ab7667c2ec7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.121768659Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=267b1280-3a4d-466a-b592-c0d6734a51e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.122804081Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2a169f96-e2c0-4b49-9529-d1ffeceff8f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.122950581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.127672379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.12788024Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b7cf72c7a7241c2e9054335546664e633f53e6b3740bb66500bc829961b9aff5/merged/etc/passwd: no such file or directory"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.127917277Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b7cf72c7a7241c2e9054335546664e633f53e6b3740bb66500bc829961b9aff5/merged/etc/group: no such file or directory"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.128204852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.161871764Z" level=info msg="Created container cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17: kube-system/storage-provisioner/storage-provisioner" id=2a169f96-e2c0-4b49-9529-d1ffeceff8f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.162527968Z" level=info msg="Starting container: cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17" id=091529a9-fb73-402a-aa2c-76c92de1ad8e name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.164496572Z" level=info msg="Started container" PID=1773 containerID=cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17 description=kube-system/storage-provisioner/storage-provisioner id=091529a9-fb73-402a-aa2c-76c92de1ad8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8e0c8f33e1add530d7dca23a4dd9eff53ad6bc54071c33a141f8cedc5514f0f
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.021302841Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c18bc1e6-18d4-4b06-881a-d275a2e7d170 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.02220561Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2f8bd974-cc58-44bd-8afa-aa4021437268 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.023169713Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper" id=94d1ae87-5a30-4625-9a6c-4a0e9f1a594f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.023318072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.032351039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.033061284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.070567832Z" level=info msg="Created container 0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper" id=94d1ae87-5a30-4625-9a6c-4a0e9f1a594f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.071245941Z" level=info msg="Starting container: 0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378" id=946ce8bd-d157-4bc4-aa63-fa4da1c9849c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.075870131Z" level=info msg="Started container" PID=1807 containerID=0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper id=946ce8bd-d157-4bc4-aa63-fa4da1c9849c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f676acb5cbcb073b2e7fa8ecad890abc414ca394eadc5279f65a143c4fa071a
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.150011204Z" level=info msg="Removing container: b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73" id=954f33a3-ee1f-4b63-8225-19b0f7cd7b39 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.161126151Z" level=info msg="Removed container b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper" id=954f33a3-ee1f-4b63-8225-19b0f7cd7b39 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	0ec92733e79bc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   1f676acb5cbcb       dashboard-metrics-scraper-5f989dc9cf-f6dfq       kubernetes-dashboard
	cb4fd533dc80e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   e8e0c8f33e1ad       storage-provisioner                              kube-system
	ca13a80691257       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   bacaaf44cb1bb       kubernetes-dashboard-8694d4445c-rlnf7            kubernetes-dashboard
	833de90f7dd18       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           48 seconds ago      Running             coredns                     0                   cd18df29f8fef       coredns-5dd5756b68-t8zg8                         kube-system
	b4e4286b6cafb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   8a5fa6d8d4b01       busybox                                          default
	45c3f69cfbb9e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           48 seconds ago      Running             kube-proxy                  0                   92f8107e20931       kube-proxy-6t2mg                                 kube-system
	39e55cc565f83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   e8e0c8f33e1ad       storage-provisioner                              kube-system
	b85f36938e981       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   a7202dba32d00       kindnet-lwhjw                                    kube-system
	35f8086b1de4e       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           51 seconds ago      Running             kube-scheduler              0                   c10d531ca7aa8       kube-scheduler-old-k8s-version-057894            kube-system
	62bca8b239fd2       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           51 seconds ago      Running             kube-controller-manager     0                   886bb088bdbfd       kube-controller-manager-old-k8s-version-057894   kube-system
	46e574a85cdd5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           51 seconds ago      Running             etcd                        0                   60020468c9772       etcd-old-k8s-version-057894                      kube-system
	5ed59b21f5fe5       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           51 seconds ago      Running             kube-apiserver              0                   012b03dc57916       kube-apiserver-old-k8s-version-057894            kube-system
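	
	The table above is crictl output collected on the node; a hypothetical way to re-query it live while the profile is still running:
	
		minikube -p old-k8s-version-057894 ssh -- sudo crictl ps -a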
	
	
	==> coredns [833de90f7dd18d80eab0ca9aa9103b5aa80cc42b8b7287f8b42b5a3b32e0adeb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60829 - 44341 "HINFO IN 4128481511327025327.5652468065265006647. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.472080843s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
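	
	The i/o timeout against 10.96.0.1:443 implicates the service VIP path (kube-proxy/iptables) rather than the apiserver process itself. One hypothetical cross-check from the node, assuming curl is present in the node image:
	
		minikube -p old-k8s-version-057894 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version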
	
	
	==> describe nodes <==
	Name:               old-k8s-version-057894
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-057894
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-057894
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_43_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-057894
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:56 +0000   Sun, 23 Nov 2025 08:43:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:56 +0000   Sun, 23 Nov 2025 08:43:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:56 +0000   Sun, 23 Nov 2025 08:43:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:44:56 +0000   Sun, 23 Nov 2025 08:43:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-057894
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c7ef2a9c-d9fc-4762-980c-1ef217fcf6e1
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-t8zg8                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-057894                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-lwhjw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-057894             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-057894    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-6t2mg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-057894             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f6dfq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-rlnf7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-057894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-057894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node old-k8s-version-057894 event: Registered Node old-k8s-version-057894 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-057894 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)    kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)    kubelet          Node old-k8s-version-057894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)    kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                  node-controller  Node old-k8s-version-057894 event: Registered Node old-k8s-version-057894 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
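	
	The repeated "martian source" entries are reverse-path-filter logging on the host bridge, noisy but usually harmless in nested-container setups. A hypothetical check of the responsible sysctls:
	
		sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians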
	
	
	==> etcd [46e574a85cdd50d2ed3dfea9bf9e72260185653dd7313da97ccc3c575be7c1e6] <==
	{"level":"info","ts":"2025-11-23T08:44:23.584293Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T08:44:23.584425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-23T08:44:23.585079Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:44:23.585492Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:44:23.5856Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:44:23.591276Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:44:23.5918Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:44:23.592059Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:44:23.592786Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:44:23.592841Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:44:24.773942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T08:44:24.773997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:44:24.774032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T08:44:24.774051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T08:44:24.77406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T08:44:24.774071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-23T08:44:24.774081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T08:44:24.775387Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-057894 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:44:24.775391Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:44:24.775432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:44:24.77602Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:44:24.776111Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:44:24.776893Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:44:24.777928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T08:45:06.950679Z","caller":"traceutil/trace.go:171","msg":"trace[714815749] transaction","detail":"{read_only:false; response_revision:653; number_of_response:1; }","duration":"118.741314ms","start":"2025-11-23T08:45:06.831916Z","end":"2025-11-23T08:45:06.950658Z","steps":["trace[714815749] 'process raft request'  (duration: 118.613987ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:45:15 up  1:27,  0 user,  load average: 5.94, 4.08, 2.50
	Linux old-k8s-version-057894 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b85f36938e98155acb198f46eeda831f2f859afb475d32fe72dec1a0e6723666] <==
	I1123 08:44:26.638894       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:26.639138       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:44:26.639316       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:26.639338       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:26.639364       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:26.995078       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:26.995128       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:26.995150       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:26.995304       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:27.495485       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:27.495522       1 metrics.go:72] Registering metrics
	I1123 08:44:27.495587       1 controller.go:711] "Syncing nftables rules"
	I1123 08:44:36.935487       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:44:36.935563       1 main.go:301] handling current node
	I1123 08:44:46.935648       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:44:46.935680       1 main.go:301] handling current node
	I1123 08:44:56.935864       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:44:56.935901       1 main.go:301] handling current node
	I1123 08:45:06.935424       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:45:06.935458       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5ed59b21f5fe5a105c3165b1f30786d03b6ba7fda1e27532fd0541a8a4b0df67] <==
	I1123 08:44:25.875653       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1123 08:44:25.948605       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:25.975485       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 08:44:25.975537       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 08:44:25.975777       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:44:25.976890       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 08:44:25.977489       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 08:44:25.978207       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:44:25.979131       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:44:25.979638       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:44:25.979652       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:44:25.979659       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:44:25.979667       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:44:26.017877       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 08:44:26.879671       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:27.077419       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:44:27.108599       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:44:27.124714       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:27.132385       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:27.138755       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:44:27.174315       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.61.250"}
	I1123 08:44:27.184213       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.111.205"}
	I1123 08:44:38.154842       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 08:44:38.182340       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:44:38.221663       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [62bca8b239fd282ce38b86b21b9897cfdd1cd66996c68c577fb4d9a16baca0f8] <==
	I1123 08:44:38.244152       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f6dfq"
	I1123 08:44:38.245229       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:44:38.247677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.799151ms"
	I1123 08:44:38.252166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.253351ms"
	I1123 08:44:38.253984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.249682ms"
	I1123 08:44:38.254081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.878µs"
	I1123 08:44:38.257160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="4.954896ms"
	I1123 08:44:38.257234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.799µs"
	I1123 08:44:38.258930       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="37.094µs"
	I1123 08:44:38.259758       1 shared_informer.go:318] Caches are synced for stateful set
	I1123 08:44:38.266165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.747µs"
	I1123 08:44:38.300794       1 shared_informer.go:318] Caches are synced for daemon sets
	I1123 08:44:38.341395       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:44:38.658989       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:44:38.680351       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:44:38.680375       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:44:43.096603       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.099337ms"
	I1123 08:44:43.097716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.848µs"
	I1123 08:44:45.098171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.109µs"
	I1123 08:44:46.098806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.342µs"
	I1123 08:44:47.102818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.959µs"
	I1123 08:44:57.869821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.198633ms"
	I1123 08:44:57.870081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.95µs"
	I1123 08:45:03.162369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="148.072µs"
	I1123 08:45:08.565287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.161µs"
	
	
	==> kube-proxy [45c3f69cfbb9e95b89ecc13be97e72337469a5dde7d9dafd2d7eb683d2e480a3] <==
	I1123 08:44:26.488357       1 server_others.go:69] "Using iptables proxy"
	I1123 08:44:26.501270       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 08:44:26.537817       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:26.544945       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:44:26.544989       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:44:26.545001       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:44:26.545049       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:44:26.545812       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:44:26.545877       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:26.552934       1 config.go:188] "Starting service config controller"
	I1123 08:44:26.552952       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:44:26.552981       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:44:26.552992       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:44:26.553561       1 config.go:315] "Starting node config controller"
	I1123 08:44:26.553569       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:44:26.653698       1 shared_informer.go:318] Caches are synced for node config
	I1123 08:44:26.653708       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:44:26.653716       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [35f8086b1de4e31006310dbc9225c47fc7ce015e3238258161e81fc2d1c7f4bd] <==
	I1123 08:44:24.019647       1 serving.go:348] Generated self-signed cert in-memory
	W1123 08:44:25.933317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:44:25.933362       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:44:25.933392       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:44:25.933403       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:44:25.955559       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 08:44:25.955588       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:25.958187       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:25.958232       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 08:44:25.959404       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 08:44:25.960002       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 08:44:26.059325       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.250571     730 topology_manager.go:215] "Topology Admit Handler" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-f6dfq"
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.302490     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqqm\" (UniqueName: \"kubernetes.io/projected/f0454ce9-5b09-4574-a70a-0566e31c41b2-kube-api-access-fgqqm\") pod \"dashboard-metrics-scraper-5f989dc9cf-f6dfq\" (UID: \"f0454ce9-5b09-4574-a70a-0566e31c41b2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq"
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.302550     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0171abf9-abe8-4871-8715-2ece3d41ce1a-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-rlnf7\" (UID: \"0171abf9-abe8-4871-8715-2ece3d41ce1a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-rlnf7"
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.302579     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f0454ce9-5b09-4574-a70a-0566e31c41b2-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-f6dfq\" (UID: \"f0454ce9-5b09-4574-a70a-0566e31c41b2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq"
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.302664     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkftz\" (UniqueName: \"kubernetes.io/projected/0171abf9-abe8-4871-8715-2ece3d41ce1a-kube-api-access-kkftz\") pod \"kubernetes-dashboard-8694d4445c-rlnf7\" (UID: \"0171abf9-abe8-4871-8715-2ece3d41ce1a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-rlnf7"
	Nov 23 08:44:45 old-k8s-version-057894 kubelet[730]: I1123 08:44:45.084057     730 scope.go:117] "RemoveContainer" containerID="0e984beb996d7e790f8a5603b3d2ea53a5721b1c7e305fb299ac9e57bbeb64d1"
	Nov 23 08:44:45 old-k8s-version-057894 kubelet[730]: I1123 08:44:45.098399     730 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-rlnf7" podStartSLOduration=3.262375916 podCreationTimestamp="2025-11-23 08:44:38 +0000 UTC" firstStartedPulling="2025-11-23 08:44:38.568750122 +0000 UTC m=+15.639160804" lastFinishedPulling="2025-11-23 08:44:42.404713931 +0000 UTC m=+19.475124614" observedRunningTime="2025-11-23 08:44:43.090267413 +0000 UTC m=+20.160678108" watchObservedRunningTime="2025-11-23 08:44:45.098339726 +0000 UTC m=+22.168750420"
	Nov 23 08:44:46 old-k8s-version-057894 kubelet[730]: I1123 08:44:46.088573     730 scope.go:117] "RemoveContainer" containerID="0e984beb996d7e790f8a5603b3d2ea53a5721b1c7e305fb299ac9e57bbeb64d1"
	Nov 23 08:44:46 old-k8s-version-057894 kubelet[730]: I1123 08:44:46.088824     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:44:46 old-k8s-version-057894 kubelet[730]: E1123 08:44:46.089195     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:44:47 old-k8s-version-057894 kubelet[730]: I1123 08:44:47.092964     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:44:47 old-k8s-version-057894 kubelet[730]: E1123 08:44:47.093272     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:44:48 old-k8s-version-057894 kubelet[730]: I1123 08:44:48.552747     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:44:48 old-k8s-version-057894 kubelet[730]: E1123 08:44:48.553005     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:44:57 old-k8s-version-057894 kubelet[730]: I1123 08:44:57.119924     730 scope.go:117] "RemoveContainer" containerID="39e55cc565f8340fb7399995b588a6585102abab97cf96c43e1cd271099cb02d"
	Nov 23 08:45:03 old-k8s-version-057894 kubelet[730]: I1123 08:45:03.020623     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:45:03 old-k8s-version-057894 kubelet[730]: I1123 08:45:03.148680     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:45:03 old-k8s-version-057894 kubelet[730]: I1123 08:45:03.148883     730 scope.go:117] "RemoveContainer" containerID="0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378"
	Nov 23 08:45:03 old-k8s-version-057894 kubelet[730]: E1123 08:45:03.149270     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:45:08 old-k8s-version-057894 kubelet[730]: I1123 08:45:08.552443     730 scope.go:117] "RemoveContainer" containerID="0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378"
	Nov 23 08:45:08 old-k8s-version-057894 kubelet[730]: E1123 08:45:08.553242     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:45:12 old-k8s-version-057894 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:45:12 old-k8s-version-057894 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:45:12 old-k8s-version-057894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:45:12 old-k8s-version-057894 systemd[1]: kubelet.service: Consumed 1.428s CPU time.
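
Editor's note: the CrashLoopBackOff entries above show the kubelet's restart back-off doubling ("back-off 10s", then "back-off 20s"). The kubelet's container restart back-off starts at 10s, doubles after each crash, and is capped at 5 minutes; the sketch below just prints that progression (the start/cap values are the documented upstream defaults, not read from this cluster).

package main

import (
	"fmt"
	"time"
)

func main() {
	// assumption: kubelet defaults of 10s initial back-off, doubled per crash, capped at 5m
	backoff, maxBackoff := 10*time.Second, 5*time.Minute
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash %d: back-off %v restarting failed container\n", crash, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}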
	
	
	==> kubernetes-dashboard [ca13a8069125754e0a5cb3de46fa71d0a79b3e2c2018ddcc6d8f0367b7d4e1d9] <==
	2025/11/23 08:44:42 Using namespace: kubernetes-dashboard
	2025/11/23 08:44:42 Using in-cluster config to connect to apiserver
	2025/11/23 08:44:42 Using secret token for csrf signing
	2025/11/23 08:44:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:44:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:44:42 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 08:44:42 Generating JWE encryption key
	2025/11/23 08:44:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:44:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:44:42 Initializing JWE encryption key from synchronized object
	2025/11/23 08:44:42 Creating in-cluster Sidecar client
	2025/11/23 08:44:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:44:42 Serving insecurely on HTTP port: 9090
	2025/11/23 08:45:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:44:42 Starting overwatch
	
	
	==> storage-provisioner [39e55cc565f8340fb7399995b588a6585102abab97cf96c43e1cd271099cb02d] <==
	I1123 08:44:26.429112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:44:56.431261       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17] <==
	I1123 08:44:57.178372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:44:57.188092       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:44:57.188144       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:45:14.654237       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:14.654385       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18c4da37-3156-4c26-a03d-1ad0569c542a", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-057894_5f629f25-40d8-4b16-9995-37043732de80 became leader
	I1123 08:45:14.654439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-057894_5f629f25-40d8-4b16-9995-37043732de80!
	I1123 08:45:14.755748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-057894_5f629f25-40d8-4b16-9995-37043732de80!
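
Editor's note: the ~17s gap between "attempting to acquire leader lease" (08:44:57) and "successfully acquired lease" (08:45:14) is the new provisioner waiting out the lease left behind by its killed predecessor. A minimal client-go sketch of the same acquire-then-work pattern follows; the durations are illustrative, and it uses a Lease lock rather than the Endpoints lock the provisioner logs, so treat it as an assumption-laden sketch, not the provisioner's code.

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()
	// lock namespace/name copied from the log above; lock type is an assumption
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long a dead leader blocks its successor
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease; starting controller")
			},
			OnStoppedLeading: func() { log.Println("lost lease, exiting") },
		},
	})
}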
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-057894 -n old-k8s-version-057894
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-057894 -n old-k8s-version-057894: exit status 2 (330.393769ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-057894 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-057894
helpers_test.go:243: (dbg) docker inspect old-k8s-version-057894:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007",
	        "Created": "2025-11-23T08:42:58.872833839Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314829,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:16.682731554Z",
	            "FinishedAt": "2025-11-23T08:44:15.70584521Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007/521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007-json.log",
	        "Name": "/old-k8s-version-057894",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-057894:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-057894",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "521ae9646520210e3010e96d66502fcad390fd15c86fe72d0f71a2e9a3d86007",
	                "LowerDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffb0cec675e0a39303310e0fd9ab0744254650338cae48fc18e016f47a39b855/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-057894",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-057894/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-057894",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-057894",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-057894",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "58ddf290da227e1a18ddd02874d933b1bdd28c78e5e2981a6be9cdb1b753310c",
	            "SandboxKey": "/var/run/docker/netns/58ddf290da22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-057894": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c80b7bca17a7fb714fa079981c2a6d3c533cb55d656f0653a2df50f0ca949782",
	                    "EndpointID": "9fa587d3e24f2bddb2d715af79b1f1b70a016111fd39f47ace80fb5ff4a32772",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ba:8c:78:99:5e:f8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-057894",
	                        "521ae9646520"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
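
Editor's note: the NetworkSettings.Ports map in the inspect output above is how the harness learns which localhost port reaches each container port (e.g. 8443/tcp is published at 127.0.0.1:33114). A minimal sketch of extracting that mapping; the struct models only the fields used here, and the container name is copied from the output above.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspectEntry models just the slice of `docker inspect` JSON we need.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-057894").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		log.Fatal("no such container")
	}
	for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33114 above
	}
}
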
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-057894 -n old-k8s-version-057894
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-057894 -n old-k8s-version-057894: exit status 2 (340.453196ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-057894 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-057894 logs -n 25: (1.074695167s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p bridge-351793                                                                                                                                                                                                                              │ bridge-351793                │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:43 UTC │
	│ stop    │ -p old-k8s-version-057894 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:43 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-057894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-653361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p newest-cni-653361 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p no-preload-187607 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-177890                                                                                                                                                                                                               │ disable-driver-mounts-177890 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ image   │ old-k8s-version-057894 image list --format=json                                                                                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-057894 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
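
Editor's note: the header above documents the klog line format used by every entry below. A small sketch that splits one such entry into its fields; the regexp is my own derivation from that format string, not klog's parser.

package main

import (
	"fmt"
	"regexp"
)

// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	m := klogLine.FindStringSubmatch("I1123 08:45:01.745123  329090 out.go:360] Setting OutFile to fd 1 ...")
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
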
	I1123 08:45:01.745123  329090 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:01.745432  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745440  329090 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:01.745446  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745739  329090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:01.746375  329090 out.go:368] Setting JSON to false
	I1123 08:45:01.748064  329090 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5249,"bootTime":1763882253,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:45:01.748157  329090 start.go:143] virtualization: kvm guest
	I1123 08:45:01.750156  329090 out.go:179] * [embed-certs-756339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:45:01.753393  329090 notify.go:221] Checking for updates...
	I1123 08:45:01.753398  329090 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:45:01.755146  329090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:45:01.756598  329090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:01.757836  329090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:45:01.758954  329090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:45:01.760360  329090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:45:01.765276  329090 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765522  329090 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765681  329090 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:45:01.765827  329090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:45:01.800644  329090 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:45:01.801313  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.871017  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.860213573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.871190  329090 docker.go:319] overlay module found
	I1123 08:45:01.872879  329090 out.go:179] * Using the docker driver based on user configuration
	I1123 08:45:01.874146  329090 start.go:309] selected driver: docker
	I1123 08:45:01.874172  329090 start.go:927] validating driver "docker" against <nil>
	I1123 08:45:01.874185  329090 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:45:01.874731  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.950283  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.938442114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.950526  329090 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:45:01.950805  329090 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.952251  329090 out.go:179] * Using Docker driver with root privileges
	I1123 08:45:01.953421  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:01.953493  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:01.953508  329090 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:45:01.953584  329090 start.go:353] cluster config:
	{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:01.954827  329090 out.go:179] * Starting "embed-certs-756339" primary control-plane node in "embed-certs-756339" cluster
	I1123 08:45:01.955848  329090 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:45:01.957107  329090 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:45:01.958365  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:01.958393  329090 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:45:01.958408  329090 cache.go:65] Caching tarball of preloaded images
	I1123 08:45:01.958465  329090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:45:01.958507  329090 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:45:01.958523  329090 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:45:01.958635  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:01.958661  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json: {Name:mk2bf238bbe57398e8f0e67e0ff345b4c996e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:01.983475  329090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:45:01.983497  329090 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:45:01.983513  329090 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:45:01.983540  329090 start.go:360] acquireMachinesLock for embed-certs-756339: {Name:mk2607c5ea38ca6bd330e0a548b36202f67f84a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:01.983642  329090 start.go:364] duration metric: took 84.653µs to acquireMachinesLock for "embed-certs-756339"
	I1123 08:45:01.983672  329090 start.go:93] Provisioning new machine with config: &{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:01.983792  329090 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:45:01.986901  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.692445857s)
	I1123 08:45:01.987002  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.670756175s)
	I1123 08:45:01.987136  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.507320621s)
	I1123 08:45:01.987186  323816 api_server.go:72] duration metric: took 2.902108336s to wait for apiserver process to appear ...
	I1123 08:45:01.987204  323816 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:01.987282  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:01.988808  323816 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-187607 addons enable metrics-server
	
	I1123 08:45:01.992707  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:45:01.992732  323816 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
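
Editor's note: the 500 above is transient; two post-start hooks (rbac/bootstrap-roles and the scheduling priority classes) have not finished, and the harness keeps polling /healthz until it returns 200 (as it does for the other profile at 08:45:01.915784). A minimal sketch of such a poll loop, not minikube's actual api_server.go; the URL is copied from the log above, the timeouts are illustrative, and TLS verification is skipped only because this sketch does not load minikubeCA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// assumption: skip cert verification instead of loading the cluster CA
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode) // 500 while bootstrap hooks run
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for healthz")
}
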
	I1123 08:45:01.994529  323816 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 08:45:01.422757  323135 addons.go:530] duration metric: took 3.555416147s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:01.910007  323135 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:45:01.915784  323135 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:45:01.917062  323135 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:01.917089  323135 api_server.go:131] duration metric: took 507.92158ms to wait for apiserver health ...
	I1123 08:45:01.917100  323135 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:01.921785  323135 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:01.921998  323135 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.922039  323135 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.922068  323135 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.922079  323135 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.922087  323135 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.922095  323135 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.922107  323135 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.922115  323135 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.922124  323135 system_pods.go:74] duration metric: took 5.016936ms to wait for pod list to return data ...
	I1123 08:45:01.922189  323135 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:01.925409  323135 default_sa.go:45] found service account: "default"
	I1123 08:45:01.925452  323135 default_sa.go:55] duration metric: took 3.245595ms for default service account to be created ...
	I1123 08:45:01.925463  323135 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:01.931804  323135 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.931872  323135 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.931898  323135 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.931961  323135 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.931995  323135 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.932018  323135 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.932037  323135 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.932066  323135 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.932076  323135 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.932086  323135 system_pods.go:126] duration metric: took 6.61665ms to wait for k8s-apps to be running ...
	I1123 08:45:01.932097  323135 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:01.932143  323135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:01.947263  323135 system_svc.go:56] duration metric: took 15.160659ms WaitForService to wait for kubelet
	I1123 08:45:01.947298  323135 kubeadm.go:587] duration metric: took 4.08017724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.947325  323135 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:01.950481  323135 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:01.950509  323135 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:01.950526  323135 node_conditions.go:105] duration metric: took 3.194245ms to run NodePressure ...
	I1123 08:45:01.950541  323135 start.go:242] waiting for startup goroutines ...
	I1123 08:45:01.950555  323135 start.go:247] waiting for cluster config update ...
	I1123 08:45:01.950571  323135 start.go:256] writing updated cluster config ...
	I1123 08:45:01.950876  323135 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:01.955038  323135 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:01.958449  323135 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:03.965246  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:01.995584  323816 addons.go:530] duration metric: took 2.910424664s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:02.487321  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:02.491678  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:45:02.492738  323816 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:02.492762  323816 api_server.go:131] duration metric: took 505.498506ms to wait for apiserver health ...
	I1123 08:45:02.492770  323816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:02.496254  323816 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:02.496282  323816 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.496290  323816 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.496296  323816 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.496302  323816 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.496310  323816 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.496317  323816 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.496324  323816 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.496334  323816 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.496340  323816 system_pods.go:74] duration metric: took 3.565076ms to wait for pod list to return data ...
	I1123 08:45:02.496348  323816 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:02.498409  323816 default_sa.go:45] found service account: "default"
	I1123 08:45:02.498426  323816 default_sa.go:55] duration metric: took 2.073405ms for default service account to be created ...
	I1123 08:45:02.498434  323816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:02.500853  323816 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.500888  323816 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.500899  323816 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.500912  323816 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.500929  323816 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.500941  323816 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.500951  323816 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.500961  323816 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.500971  323816 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.500978  323816 system_pods.go:126] duration metric: took 2.538671ms to wait for k8s-apps to be running ...
	I1123 08:45:02.500991  323816 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:02.501036  323816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:02.522199  323816 system_svc.go:56] duration metric: took 21.201972ms WaitForService to wait for kubelet
	I1123 08:45:02.522225  323816 kubeadm.go:587] duration metric: took 3.437147085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:02.522246  323816 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:02.524870  323816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:02.524905  323816 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:02.524925  323816 node_conditions.go:105] duration metric: took 2.673388ms to run NodePressure ...
	I1123 08:45:02.524943  323816 start.go:242] waiting for startup goroutines ...
	I1123 08:45:02.524953  323816 start.go:247] waiting for cluster config update ...
	I1123 08:45:02.524970  323816 start.go:256] writing updated cluster config ...
	I1123 08:45:02.525241  323816 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:02.529440  323816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:02.532956  323816 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:04.545550  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
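	(Editor's note: the pod_ready.go lines above record a "Ready or gone" wait per kube-system pod, warning roughly every two seconds while coredns is not Ready. A hedged client-go sketch of that check follows; clientset construction is elided and all names here are illustrative, not minikube's source.)

```go
// Sketch of a "wait until pod is Ready or deleted" loop using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitPodReadyOrGone(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // pod is gone; the log treats that as success too
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // ~2s between "not Ready" warnings in the log
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}
```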
	I1123 08:45:01.985817  329090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:45:01.986054  329090 start.go:159] libmachine.API.Create for "embed-certs-756339" (driver="docker")
	I1123 08:45:01.986094  329090 client.go:173] LocalClient.Create starting
	I1123 08:45:01.986158  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:45:01.986202  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986228  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986299  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:45:01.986331  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986349  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986747  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:45:02.006351  329090 cli_runner.go:211] docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:45:02.006428  329090 network_create.go:284] running [docker network inspect embed-certs-756339] to gather additional debugging logs...
	I1123 08:45:02.006453  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339
	W1123 08:45:02.024029  329090 cli_runner.go:211] docker network inspect embed-certs-756339 returned with exit code 1
	I1123 08:45:02.024056  329090 network_create.go:287] error running [docker network inspect embed-certs-756339]: docker network inspect embed-certs-756339: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-756339 not found
	I1123 08:45:02.024076  329090 network_create.go:289] output of [docker network inspect embed-certs-756339]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-756339 not found
	
	** /stderr **
	I1123 08:45:02.024188  329090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:02.041589  329090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:45:02.042147  329090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:45:02.042884  329090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:45:02.043340  329090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c80b7bca17a7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:f1:41:59:09:b5} reservation:<nil>}
	I1123 08:45:02.043937  329090 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8e58961f3024 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b6:f0:e4:3c:63:d5} reservation:<nil>}
	I1123 08:45:02.044437  329090 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e4a86ee726da IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ae:37:bc:fe:9d:3a} reservation:<nil>}
	I1123 08:45:02.045221  329090 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06cd0}
	I1123 08:45:02.045242  329090 network_create.go:124] attempt to create docker network embed-certs-756339 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1123 08:45:02.045287  329090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-756339 embed-certs-756339
	I1123 08:45:02.095267  329090 network_create.go:108] docker network embed-certs-756339 192.168.103.0/24 created
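	(Editor's note: the network.go lines above walk private /24 candidates, stepping the third octet by 9 each try (49, 58, 67, 76, 85, 94) until 192.168.103.0/24 is free. A simplified sketch follows; the real code inspects docker networks to build the taken set, which is passed in here for illustration.)

```go
// Simplified sketch of the free-subnet scan logged above.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet, nil
		}
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	s, _ := firstFreeSubnet(taken)
	fmt.Println(s) // prints 192.168.103.0/24, matching the log
}
```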
	I1123 08:45:02.095296  329090 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-756339" container
	I1123 08:45:02.095350  329090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:45:02.111533  329090 cli_runner.go:164] Run: docker volume create embed-certs-756339 --label name.minikube.sigs.k8s.io=embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:45:02.128824  329090 oci.go:103] Successfully created a docker volume embed-certs-756339
	I1123 08:45:02.128896  329090 cli_runner.go:164] Run: docker run --rm --name embed-certs-756339-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --entrypoint /usr/bin/test -v embed-certs-756339:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:45:02.559029  329090 oci.go:107] Successfully prepared a docker volume embed-certs-756339
	I1123 08:45:02.559098  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:02.559108  329090 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:45:02.559163  329090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:45:06.464312  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:08.466215  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:06.707246  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:09.040137  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:11.046122  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:07.131448  329090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.572224972s)
	I1123 08:45:07.131484  329090 kic.go:203] duration metric: took 4.572370498s to extract preloaded images to volume ...
	W1123 08:45:07.131573  329090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:45:07.131616  329090 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:45:07.131860  329090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:45:07.219659  329090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-756339 --name embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-756339 --network embed-certs-756339 --ip 192.168.103.2 --volume embed-certs-756339:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:45:07.635482  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Running}}
	I1123 08:45:07.658965  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.681327  329090 cli_runner.go:164] Run: docker exec embed-certs-756339 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:45:07.737769  329090 oci.go:144] the created container "embed-certs-756339" has a running status.
	I1123 08:45:07.737802  329090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa...
	I1123 08:45:07.895228  329090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:45:07.935222  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.958382  329090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:45:07.958405  329090 kic_runner.go:114] Args: [docker exec --privileged embed-certs-756339 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:45:08.015520  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:08.039803  329090 machine.go:94] provisionDockerMachine start ...
	I1123 08:45:08.039898  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:08.064345  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:08.064680  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:08.064723  329090 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:45:08.065347  329090 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47890->127.0.0.1:33131: read: connection reset by peer
	I1123 08:45:11.244730  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
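	(Editor's note: the first dial above fails with "connection reset by peer" because sshd inside the freshly started container is not yet accepting connections; the provisioner retries until the hostname command succeeds a few seconds later. A sketch of that retry pattern with golang.org/x/crypto/ssh follows; key loading and the ClientConfig contents are elided assumptions.)

```go
// Sketch of retrying an SSH dial against a just-started container until
// sshd comes up or a deadline passes.
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
		}
		time.Sleep(time.Second) // sshd in the container may still be starting
	}
}
```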
	
	I1123 08:45:11.244755  329090 ubuntu.go:182] provisioning hostname "embed-certs-756339"
	I1123 08:45:11.244812  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.273763  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.274055  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.274072  329090 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-756339 && echo "embed-certs-756339" | sudo tee /etc/hostname
	I1123 08:45:11.457570  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.457714  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.488146  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.488457  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.488485  329090 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-756339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-756339/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-756339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:45:11.660198  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:45:11.660362  329090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:45:11.660453  329090 ubuntu.go:190] setting up certificates
	I1123 08:45:11.660471  329090 provision.go:84] configureAuth start
	I1123 08:45:11.661011  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:11.684982  329090 provision.go:143] copyHostCerts
	I1123 08:45:11.685043  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:45:11.685053  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:45:11.685140  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:45:11.685249  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:45:11.685255  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:45:11.685292  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:45:11.685383  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:45:11.685391  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:45:11.685427  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:45:11.685506  329090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.embed-certs-756339 san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]
	I1123 08:45:11.758697  329090 provision.go:177] copyRemoteCerts
	I1123 08:45:11.758777  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:45:11.758833  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.787179  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:11.905965  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:45:11.934744  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:45:11.961707  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:45:11.985963  329090 provision.go:87] duration metric: took 325.479379ms to configureAuth
	I1123 08:45:11.985992  329090 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:45:11.986220  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:11.986358  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.011499  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:12.011833  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:12.011872  329090 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:45:12.373361  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:45:12.373388  329090 machine.go:97] duration metric: took 4.333562614s to provisionDockerMachine
	I1123 08:45:12.373402  329090 client.go:176] duration metric: took 10.387301049s to LocalClient.Create
	I1123 08:45:12.373431  329090 start.go:167] duration metric: took 10.387376613s to libmachine.API.Create "embed-certs-756339"
	I1123 08:45:12.373444  329090 start.go:293] postStartSetup for "embed-certs-756339" (driver="docker")
	I1123 08:45:12.373458  329090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:45:12.373521  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:45:12.373575  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.394472  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.505303  329090 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:45:12.509881  329090 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:45:12.509946  329090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:45:12.509962  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:45:12.510025  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:45:12.510127  329090 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:45:12.510256  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:45:12.520339  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:12.547586  329090 start.go:296] duration metric: took 174.127267ms for postStartSetup
	I1123 08:45:12.548040  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.572325  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:12.572597  329090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:45:12.572652  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.595241  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.708576  329090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:45:12.713786  329090 start.go:128] duration metric: took 10.729979645s to createHost
	I1123 08:45:12.713812  329090 start.go:83] releasing machines lock for "embed-certs-756339", held for 10.730153164s
	I1123 08:45:12.713888  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.744434  329090 ssh_runner.go:195] Run: cat /version.json
	I1123 08:45:12.744496  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.744678  329090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:45:12.744776  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.771659  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.771722  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.970377  329090 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:12.980003  329090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:45:13.031076  329090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:45:13.037986  329090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:45:13.038091  329090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:45:13.078655  329090 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:45:13.078678  329090 start.go:496] detecting cgroup driver to use...
	I1123 08:45:13.078778  329090 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:45:13.078826  329090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:45:13.102501  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:45:13.121011  329090 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:45:13.121088  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:45:13.144025  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:45:13.166610  329090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:45:13.266885  329090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:45:13.383738  329090 docker.go:234] disabling docker service ...
	I1123 08:45:13.383808  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:45:13.408902  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:45:13.425055  329090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:45:13.533375  329090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:45:13.641970  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:45:13.655349  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:45:13.672802  329090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:45:13.672859  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.682619  329090 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:45:13.682671  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.691340  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.700633  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.709880  329090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:45:13.717844  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.726872  329090 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.741035  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.750011  329090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:45:13.757738  329090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:45:13.764834  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:13.846176  329090 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:45:15.041719  329090 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.195506975s)
	I1123 08:45:15.041743  329090 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:45:15.041806  329090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:45:15.046071  329090 start.go:564] Will wait 60s for crictl version
	I1123 08:45:15.046136  329090 ssh_runner.go:195] Run: which crictl
	I1123 08:45:15.049573  329090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:45:15.078843  329090 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
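	(Editor's note: after restarting CRI-O, the log shows two gated waits, first for the /var/run/crio/crio.sock path to exist, then for crictl version to answer. A sketch of that readiness probe follows; it mirrors the logged steps, not minikube's exact implementation, and shells out to crictl rather than speaking CRI directly.)

```go
// Sketch: poll for the CRI-O socket, then ask crictl for the version.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForCrio(sock string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
			if err == nil {
				fmt.Print(string(out)) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.34.2
				return nil
			}
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("crio socket %s not ready within %s", sock, timeout)
}
```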
	I1123 08:45:15.078920  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.108962  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.139712  329090 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 08:45:10.968346  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.466785  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.540283  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:16.038123  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 23 08:44:45 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:45.134491151Z" level=info msg="Started container" PID=1758 containerID=b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper id=694058b3-3acd-4161-9bc1-2166b6c20e5b name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f676acb5cbcb073b2e7fa8ecad890abc414ca394eadc5279f65a143c4fa071a
	Nov 23 08:44:46 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:46.089901364Z" level=info msg="Removing container: 0e984beb996d7e790f8a5603b3d2ea53a5721b1c7e305fb299ac9e57bbeb64d1" id=d422007e-6a1b-485d-925c-ea31e89dfba3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:44:46 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:46.098846639Z" level=info msg="Removed container 0e984beb996d7e790f8a5603b3d2ea53a5721b1c7e305fb299ac9e57bbeb64d1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper" id=d422007e-6a1b-485d-925c-ea31e89dfba3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.120493391Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a1695869-061e-4ce6-9f90-2ab7667c2ec7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.121768659Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=267b1280-3a4d-466a-b592-c0d6734a51e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.122804081Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2a169f96-e2c0-4b49-9529-d1ffeceff8f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.122950581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.127672379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.12788024Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b7cf72c7a7241c2e9054335546664e633f53e6b3740bb66500bc829961b9aff5/merged/etc/passwd: no such file or directory"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.127917277Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b7cf72c7a7241c2e9054335546664e633f53e6b3740bb66500bc829961b9aff5/merged/etc/group: no such file or directory"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.128204852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.161871764Z" level=info msg="Created container cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17: kube-system/storage-provisioner/storage-provisioner" id=2a169f96-e2c0-4b49-9529-d1ffeceff8f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.162527968Z" level=info msg="Starting container: cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17" id=091529a9-fb73-402a-aa2c-76c92de1ad8e name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:44:57 old-k8s-version-057894 crio[568]: time="2025-11-23T08:44:57.164496572Z" level=info msg="Started container" PID=1773 containerID=cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17 description=kube-system/storage-provisioner/storage-provisioner id=091529a9-fb73-402a-aa2c-76c92de1ad8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8e0c8f33e1add530d7dca23a4dd9eff53ad6bc54071c33a141f8cedc5514f0f
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.021302841Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c18bc1e6-18d4-4b06-881a-d275a2e7d170 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.02220561Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2f8bd974-cc58-44bd-8afa-aa4021437268 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.023169713Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper" id=94d1ae87-5a30-4625-9a6c-4a0e9f1a594f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.023318072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.032351039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.033061284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.070567832Z" level=info msg="Created container 0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper" id=94d1ae87-5a30-4625-9a6c-4a0e9f1a594f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.071245941Z" level=info msg="Starting container: 0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378" id=946ce8bd-d157-4bc4-aa63-fa4da1c9849c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.075870131Z" level=info msg="Started container" PID=1807 containerID=0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper id=946ce8bd-d157-4bc4-aa63-fa4da1c9849c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f676acb5cbcb073b2e7fa8ecad890abc414ca394eadc5279f65a143c4fa071a
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.150011204Z" level=info msg="Removing container: b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73" id=954f33a3-ee1f-4b63-8225-19b0f7cd7b39 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:03 old-k8s-version-057894 crio[568]: time="2025-11-23T08:45:03.161126151Z" level=info msg="Removed container b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq/dashboard-metrics-scraper" id=954f33a3-ee1f-4b63-8225-19b0f7cd7b39 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	0ec92733e79bc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   1f676acb5cbcb       dashboard-metrics-scraper-5f989dc9cf-f6dfq       kubernetes-dashboard
	cb4fd533dc80e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   e8e0c8f33e1ad       storage-provisioner                              kube-system
	ca13a80691257       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   34 seconds ago      Running             kubernetes-dashboard        0                   bacaaf44cb1bb       kubernetes-dashboard-8694d4445c-rlnf7            kubernetes-dashboard
	833de90f7dd18       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   cd18df29f8fef       coredns-5dd5756b68-t8zg8                         kube-system
	b4e4286b6cafb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   8a5fa6d8d4b01       busybox                                          default
	45c3f69cfbb9e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   92f8107e20931       kube-proxy-6t2mg                                 kube-system
	39e55cc565f83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   e8e0c8f33e1ad       storage-provisioner                              kube-system
	b85f36938e981       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   a7202dba32d00       kindnet-lwhjw                                    kube-system
	35f8086b1de4e       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   c10d531ca7aa8       kube-scheduler-old-k8s-version-057894            kube-system
	62bca8b239fd2       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   886bb088bdbfd       kube-controller-manager-old-k8s-version-057894   kube-system
	46e574a85cdd5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   60020468c9772       etcd-old-k8s-version-057894                      kube-system
	5ed59b21f5fe5       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   012b03dc57916       kube-apiserver-old-k8s-version-057894            kube-system
	
	
	==> coredns [833de90f7dd18d80eab0ca9aa9103b5aa80cc42b8b7287f8b42b5a3b32e0adeb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60829 - 44341 "HINFO IN 4128481511327025327.5652468065265006647. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.472080843s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
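The repeated "waiting for Kubernetes API" lines, the "starting server with unsynced Kubernetes API" warning, and the closing i/o timeout against 10.96.0.1:443 all point at CoreDNS coming up while the in-cluster apiserver service VIP was still unreachable. A quick way to confirm whether CoreDNS recovered afterwards (a diagnostic sketch, assuming the old-k8s-version-057894 kubectl context from this run is still available):

	# Check the CoreDNS pod and its most recent log lines
	kubectl --context old-k8s-version-057894 -n kube-system get pods -l k8s-app=kube-dns -o wide
	kubectl --context old-k8s-version-057894 -n kube-system logs -l k8s-app=kube-dns --tail=5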
	
	
	==> describe nodes <==
	Name:               old-k8s-version-057894
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-057894
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=old-k8s-version-057894
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_43_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-057894
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:44:56 +0000   Sun, 23 Nov 2025 08:43:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:44:56 +0000   Sun, 23 Nov 2025 08:43:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:44:56 +0000   Sun, 23 Nov 2025 08:43:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:44:56 +0000   Sun, 23 Nov 2025 08:43:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-057894
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c7ef2a9c-d9fc-4762-980c-1ef217fcf6e1
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-t8zg8                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-old-k8s-version-057894                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-lwhjw                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-057894             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-057894    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-6t2mg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-057894             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f6dfq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-rlnf7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-057894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                 kubelet          Node old-k8s-version-057894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                 kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m2s                 kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-057894 event: Registered Node old-k8s-version-057894 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-057894 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node old-k8s-version-057894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node old-k8s-version-057894 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-057894 event: Registered Node old-k8s-version-057894 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [46e574a85cdd50d2ed3dfea9bf9e72260185653dd7313da97ccc3c575be7c1e6] <==
	{"level":"info","ts":"2025-11-23T08:44:23.584293Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T08:44:23.584425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-23T08:44:23.585079Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:44:23.585492Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:44:23.5856Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:44:23.591276Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:44:23.5918Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:44:23.592059Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T08:44:23.592786Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:44:23.592841Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:44:24.773942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T08:44:24.773997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:44:24.774032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T08:44:24.774051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T08:44:24.77406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T08:44:24.774071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-23T08:44:24.774081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T08:44:24.775387Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-057894 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:44:24.775391Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:44:24.775432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:44:24.77602Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:44:24.776111Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:44:24.776893Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:44:24.777928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T08:45:06.950679Z","caller":"traceutil/trace.go:171","msg":"trace[714815749] transaction","detail":"{read_only:false; response_revision:653; number_of_response:1; }","duration":"118.741314ms","start":"2025-11-23T08:45:06.831916Z","end":"2025-11-23T08:45:06.950658Z","steps":["trace[714815749] 'process raft request'  (duration: 118.613987ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:45:17 up  1:27,  0 user,  load average: 5.94, 4.08, 2.50
	Linux old-k8s-version-057894 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b85f36938e98155acb198f46eeda831f2f859afb475d32fe72dec1a0e6723666] <==
	I1123 08:44:26.638894       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:44:26.639138       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 08:44:26.639316       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:44:26.639338       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:44:26.639364       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:44:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:44:26.995078       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:44:26.995128       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:44:26.995150       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:44:26.995304       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:44:27.495485       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:44:27.495522       1 metrics.go:72] Registering metrics
	I1123 08:44:27.495587       1 controller.go:711] "Syncing nftables rules"
	I1123 08:44:36.935487       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:44:36.935563       1 main.go:301] handling current node
	I1123 08:44:46.935648       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:44:46.935680       1 main.go:301] handling current node
	I1123 08:44:56.935864       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:44:56.935901       1 main.go:301] handling current node
	I1123 08:45:06.935424       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:45:06.935458       1 main.go:301] handling current node
	I1123 08:45:16.936192       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 08:45:16.936233       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5ed59b21f5fe5a105c3165b1f30786d03b6ba7fda1e27532fd0541a8a4b0df67] <==
	I1123 08:44:25.875653       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1123 08:44:25.948605       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:44:25.975485       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1123 08:44:25.975537       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 08:44:25.975777       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:44:25.976890       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 08:44:25.977489       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 08:44:25.978207       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:44:25.979131       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:44:25.979638       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:44:25.979652       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:44:25.979659       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:44:25.979667       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:44:26.017877       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 08:44:26.879671       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:44:27.077419       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:44:27.108599       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:44:27.124714       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:44:27.132385       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:44:27.138755       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:44:27.174315       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.61.250"}
	I1123 08:44:27.184213       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.111.205"}
	I1123 08:44:38.154842       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 08:44:38.182340       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:44:38.221663       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [62bca8b239fd282ce38b86b21b9897cfdd1cd66996c68c577fb4d9a16baca0f8] <==
	I1123 08:44:38.244152       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f6dfq"
	I1123 08:44:38.245229       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:44:38.247677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.799151ms"
	I1123 08:44:38.252166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.253351ms"
	I1123 08:44:38.253984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.249682ms"
	I1123 08:44:38.254081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="55.878µs"
	I1123 08:44:38.257160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="4.954896ms"
	I1123 08:44:38.257234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.799µs"
	I1123 08:44:38.258930       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="37.094µs"
	I1123 08:44:38.259758       1 shared_informer.go:318] Caches are synced for stateful set
	I1123 08:44:38.266165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.747µs"
	I1123 08:44:38.300794       1 shared_informer.go:318] Caches are synced for daemon sets
	I1123 08:44:38.341395       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:44:38.658989       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:44:38.680351       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:44:38.680375       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:44:43.096603       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.099337ms"
	I1123 08:44:43.097716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.848µs"
	I1123 08:44:45.098171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.109µs"
	I1123 08:44:46.098806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.342µs"
	I1123 08:44:47.102818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.959µs"
	I1123 08:44:57.869821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.198633ms"
	I1123 08:44:57.870081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.95µs"
	I1123 08:45:03.162369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="148.072µs"
	I1123 08:45:08.565287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.161µs"
	
	
	==> kube-proxy [45c3f69cfbb9e95b89ecc13be97e72337469a5dde7d9dafd2d7eb683d2e480a3] <==
	I1123 08:44:26.488357       1 server_others.go:69] "Using iptables proxy"
	I1123 08:44:26.501270       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 08:44:26.537817       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:44:26.544945       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:44:26.544989       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:44:26.545001       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:44:26.545049       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:44:26.545812       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:44:26.545877       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:26.552934       1 config.go:188] "Starting service config controller"
	I1123 08:44:26.552952       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:44:26.552981       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:44:26.552992       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:44:26.553561       1 config.go:315] "Starting node config controller"
	I1123 08:44:26.553569       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:44:26.653698       1 shared_informer.go:318] Caches are synced for node config
	I1123 08:44:26.653708       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:44:26.653716       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [35f8086b1de4e31006310dbc9225c47fc7ce015e3238258161e81fc2d1c7f4bd] <==
	I1123 08:44:24.019647       1 serving.go:348] Generated self-signed cert in-memory
	W1123 08:44:25.933317       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:44:25.933362       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:44:25.933392       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:44:25.933403       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:44:25.955559       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 08:44:25.955588       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:44:25.958187       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:44:25.958232       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 08:44:25.959404       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 08:44:25.960002       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 08:44:26.059325       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.250571     730 topology_manager.go:215] "Topology Admit Handler" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-f6dfq"
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.302490     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqqm\" (UniqueName: \"kubernetes.io/projected/f0454ce9-5b09-4574-a70a-0566e31c41b2-kube-api-access-fgqqm\") pod \"dashboard-metrics-scraper-5f989dc9cf-f6dfq\" (UID: \"f0454ce9-5b09-4574-a70a-0566e31c41b2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq"
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.302550     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0171abf9-abe8-4871-8715-2ece3d41ce1a-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-rlnf7\" (UID: \"0171abf9-abe8-4871-8715-2ece3d41ce1a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-rlnf7"
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.302579     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f0454ce9-5b09-4574-a70a-0566e31c41b2-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-f6dfq\" (UID: \"f0454ce9-5b09-4574-a70a-0566e31c41b2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq"
	Nov 23 08:44:38 old-k8s-version-057894 kubelet[730]: I1123 08:44:38.302664     730 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkftz\" (UniqueName: \"kubernetes.io/projected/0171abf9-abe8-4871-8715-2ece3d41ce1a-kube-api-access-kkftz\") pod \"kubernetes-dashboard-8694d4445c-rlnf7\" (UID: \"0171abf9-abe8-4871-8715-2ece3d41ce1a\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-rlnf7"
	Nov 23 08:44:45 old-k8s-version-057894 kubelet[730]: I1123 08:44:45.084057     730 scope.go:117] "RemoveContainer" containerID="0e984beb996d7e790f8a5603b3d2ea53a5721b1c7e305fb299ac9e57bbeb64d1"
	Nov 23 08:44:45 old-k8s-version-057894 kubelet[730]: I1123 08:44:45.098399     730 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-rlnf7" podStartSLOduration=3.262375916 podCreationTimestamp="2025-11-23 08:44:38 +0000 UTC" firstStartedPulling="2025-11-23 08:44:38.568750122 +0000 UTC m=+15.639160804" lastFinishedPulling="2025-11-23 08:44:42.404713931 +0000 UTC m=+19.475124614" observedRunningTime="2025-11-23 08:44:43.090267413 +0000 UTC m=+20.160678108" watchObservedRunningTime="2025-11-23 08:44:45.098339726 +0000 UTC m=+22.168750420"
	Nov 23 08:44:46 old-k8s-version-057894 kubelet[730]: I1123 08:44:46.088573     730 scope.go:117] "RemoveContainer" containerID="0e984beb996d7e790f8a5603b3d2ea53a5721b1c7e305fb299ac9e57bbeb64d1"
	Nov 23 08:44:46 old-k8s-version-057894 kubelet[730]: I1123 08:44:46.088824     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:44:46 old-k8s-version-057894 kubelet[730]: E1123 08:44:46.089195     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:44:47 old-k8s-version-057894 kubelet[730]: I1123 08:44:47.092964     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:44:47 old-k8s-version-057894 kubelet[730]: E1123 08:44:47.093272     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:44:48 old-k8s-version-057894 kubelet[730]: I1123 08:44:48.552747     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:44:48 old-k8s-version-057894 kubelet[730]: E1123 08:44:48.553005     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:44:57 old-k8s-version-057894 kubelet[730]: I1123 08:44:57.119924     730 scope.go:117] "RemoveContainer" containerID="39e55cc565f8340fb7399995b588a6585102abab97cf96c43e1cd271099cb02d"
	Nov 23 08:45:03 old-k8s-version-057894 kubelet[730]: I1123 08:45:03.020623     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:45:03 old-k8s-version-057894 kubelet[730]: I1123 08:45:03.148680     730 scope.go:117] "RemoveContainer" containerID="b835e9c15dbec3a4ea13dc89995285b42086d41c98b27eb7f8425e3d2921ba73"
	Nov 23 08:45:03 old-k8s-version-057894 kubelet[730]: I1123 08:45:03.148883     730 scope.go:117] "RemoveContainer" containerID="0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378"
	Nov 23 08:45:03 old-k8s-version-057894 kubelet[730]: E1123 08:45:03.149270     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:45:08 old-k8s-version-057894 kubelet[730]: I1123 08:45:08.552443     730 scope.go:117] "RemoveContainer" containerID="0ec92733e79bcdc395ca7d52d4f1bb0e3fe7f3e9aa2510cda2531797a6eb3378"
	Nov 23 08:45:08 old-k8s-version-057894 kubelet[730]: E1123 08:45:08.553242     730 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f6dfq_kubernetes-dashboard(f0454ce9-5b09-4574-a70a-0566e31c41b2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f6dfq" podUID="f0454ce9-5b09-4574-a70a-0566e31c41b2"
	Nov 23 08:45:12 old-k8s-version-057894 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:45:12 old-k8s-version-057894 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:45:12 old-k8s-version-057894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:45:12 old-k8s-version-057894 systemd[1]: kubelet.service: Consumed 1.428s CPU time.
	
	
	==> kubernetes-dashboard [ca13a8069125754e0a5cb3de46fa71d0a79b3e2c2018ddcc6d8f0367b7d4e1d9] <==
	2025/11/23 08:44:42 Using namespace: kubernetes-dashboard
	2025/11/23 08:44:42 Using in-cluster config to connect to apiserver
	2025/11/23 08:44:42 Using secret token for csrf signing
	2025/11/23 08:44:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:44:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:44:42 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 08:44:42 Generating JWE encryption key
	2025/11/23 08:44:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:44:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:44:42 Initializing JWE encryption key from synchronized object
	2025/11/23 08:44:42 Creating in-cluster Sidecar client
	2025/11/23 08:44:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:44:42 Serving insecurely on HTTP port: 9090
	2025/11/23 08:45:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:44:42 Starting overwatch
	
	
	==> storage-provisioner [39e55cc565f8340fb7399995b588a6585102abab97cf96c43e1cd271099cb02d] <==
	I1123 08:44:26.429112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:44:56.431261       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
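This is the same 10.96.0.1:443 timeout that CoreDNS logged above; it brought down the first storage-provisioner instance 30 seconds after it started. One way to probe the kubernetes service VIP directly from the pod network (a diagnostic sketch; the pod name vip-probe and the busybox:1.36 tag are illustrative assumptions):

	# /version is usually readable anonymously, so a JSON response here means the VIP works
	kubectl --context old-k8s-version-057894 run vip-probe --rm -i --restart=Never \
	  --image=busybox:1.36 -- wget -qO- --no-check-certificate https://10.96.0.1:443/version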
	
	
	==> storage-provisioner [cb4fd533dc80ea4296c3272defe70f1c36f6c1819a3a8d39ce2cd4d9e3af9f17] <==
	I1123 08:44:57.178372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:44:57.188092       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:44:57.188144       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:45:14.654237       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:14.654385       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18c4da37-3156-4c26-a03d-1ad0569c542a", APIVersion:"v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-057894_5f629f25-40d8-4b16-9995-37043732de80 became leader
	I1123 08:45:14.654439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-057894_5f629f25-40d8-4b16-9995-37043732de80!
	I1123 08:45:14.755748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-057894_5f629f25-40d8-4b16-9995-37043732de80!

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-057894 -n old-k8s-version-057894
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-057894 -n old-k8s-version-057894: exit status 2 (344.170557ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-057894 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.14s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-726261 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-726261 --alsologtostderr -v=1: exit status 80 (1.718312019s)

-- stdout --
	* Pausing node default-k8s-diff-port-726261 ... 
	
	

-- /stdout --
** stderr ** 
	I1123 08:45:52.613100  335753 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:52.613193  335753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:52.613201  335753 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:52.613206  335753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:52.613447  335753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:52.613656  335753 out.go:368] Setting JSON to false
	I1123 08:45:52.613675  335753 mustload.go:66] Loading cluster: default-k8s-diff-port-726261
	I1123 08:45:52.614028  335753 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:52.614389  335753 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-726261 --format={{.State.Status}}
	I1123 08:45:52.632351  335753 host.go:66] Checking if "default-k8s-diff-port-726261" exists ...
	I1123 08:45:52.632574  335753 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:52.692538  335753 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:45:52.682272243 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:52.693127  335753 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-726261 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:45:52.694817  335753 out.go:179] * Pausing node default-k8s-diff-port-726261 ... 
	I1123 08:45:52.695866  335753 host.go:66] Checking if "default-k8s-diff-port-726261" exists ...
	I1123 08:45:52.696109  335753 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:52.696155  335753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-726261
	I1123 08:45:52.713922  335753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/default-k8s-diff-port-726261/id_rsa Username:docker}
	I1123 08:45:52.812611  335753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:52.840878  335753 pause.go:52] kubelet running: true
	I1123 08:45:52.840945  335753 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:45:53.002369  335753 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:45:53.002443  335753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:45:53.066636  335753 cri.go:89] found id: "4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70"
	I1123 08:45:53.066657  335753 cri.go:89] found id: "7ef10f196889c9d3ac8c5dd4fd68549002b4069ef13629eda75f63e301942ebf"
	I1123 08:45:53.066661  335753 cri.go:89] found id: "1da1b5290a3ae07eb95a2f27c1a53ff4324a0646fd34b71680617d7f9aaaf8fb"
	I1123 08:45:53.066665  335753 cri.go:89] found id: "5570992f3d35e1c9011ec15df9afc2ce9eba453be9b90083b5f4b396eab5dd4e"
	I1123 08:45:53.066667  335753 cri.go:89] found id: "ccbc32cc46374e63b5543296cbf640d9549909d2c0eceece3434d9968f9a5845"
	I1123 08:45:53.066672  335753 cri.go:89] found id: "b50ab2696e1e67f0ac0d0181ec5963bc5d4a3d2b32af3b3f35daa7d47a58c5e9"
	I1123 08:45:53.066675  335753 cri.go:89] found id: "246dd1e6858bda7bf5fe65c5645c08c92b5d0ed231ce7b5b4abd97fac4802f8f"
	I1123 08:45:53.066677  335753 cri.go:89] found id: "dc3d29d35b622d6b93507aea04eac9baab619145bad0ffc805501a72a5c213eb"
	I1123 08:45:53.066680  335753 cri.go:89] found id: "460c9f8d4b0648fd809721216954ce6522f68a315fdb7c8fbacbaeb8288f1ffb"
	I1123 08:45:53.066700  335753 cri.go:89] found id: "03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a"
	I1123 08:45:53.066705  335753 cri.go:89] found id: "3def24cbd2530ecf5a755e903ca77a95246cde5cba1a19043250f071201a1518"
	I1123 08:45:53.066709  335753 cri.go:89] found id: ""
	I1123 08:45:53.066748  335753 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:45:53.077716  335753 retry.go:31] will retry after 373.650525ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:53Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:45:53.452134  335753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:53.466358  335753 pause.go:52] kubelet running: false
	I1123 08:45:53.466413  335753 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:45:53.616268  335753 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:45:53.616373  335753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:45:53.685972  335753 cri.go:89] found id: "4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70"
	I1123 08:45:53.685996  335753 cri.go:89] found id: "7ef10f196889c9d3ac8c5dd4fd68549002b4069ef13629eda75f63e301942ebf"
	I1123 08:45:53.686001  335753 cri.go:89] found id: "1da1b5290a3ae07eb95a2f27c1a53ff4324a0646fd34b71680617d7f9aaaf8fb"
	I1123 08:45:53.686004  335753 cri.go:89] found id: "5570992f3d35e1c9011ec15df9afc2ce9eba453be9b90083b5f4b396eab5dd4e"
	I1123 08:45:53.686007  335753 cri.go:89] found id: "ccbc32cc46374e63b5543296cbf640d9549909d2c0eceece3434d9968f9a5845"
	I1123 08:45:53.686010  335753 cri.go:89] found id: "b50ab2696e1e67f0ac0d0181ec5963bc5d4a3d2b32af3b3f35daa7d47a58c5e9"
	I1123 08:45:53.686013  335753 cri.go:89] found id: "246dd1e6858bda7bf5fe65c5645c08c92b5d0ed231ce7b5b4abd97fac4802f8f"
	I1123 08:45:53.686016  335753 cri.go:89] found id: "dc3d29d35b622d6b93507aea04eac9baab619145bad0ffc805501a72a5c213eb"
	I1123 08:45:53.686019  335753 cri.go:89] found id: "460c9f8d4b0648fd809721216954ce6522f68a315fdb7c8fbacbaeb8288f1ffb"
	I1123 08:45:53.686025  335753 cri.go:89] found id: "03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a"
	I1123 08:45:53.686028  335753 cri.go:89] found id: "3def24cbd2530ecf5a755e903ca77a95246cde5cba1a19043250f071201a1518"
	I1123 08:45:53.686033  335753 cri.go:89] found id: ""
	I1123 08:45:53.686077  335753 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:45:53.698834  335753 retry.go:31] will retry after 304.667066ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:53Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:45:54.004379  335753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:54.018962  335753 pause.go:52] kubelet running: false
	I1123 08:45:54.019015  335753 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:45:54.189471  335753 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:45:54.189545  335753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:45:54.252788  335753 cri.go:89] found id: "4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70"
	I1123 08:45:54.252810  335753 cri.go:89] found id: "7ef10f196889c9d3ac8c5dd4fd68549002b4069ef13629eda75f63e301942ebf"
	I1123 08:45:54.252814  335753 cri.go:89] found id: "1da1b5290a3ae07eb95a2f27c1a53ff4324a0646fd34b71680617d7f9aaaf8fb"
	I1123 08:45:54.252818  335753 cri.go:89] found id: "5570992f3d35e1c9011ec15df9afc2ce9eba453be9b90083b5f4b396eab5dd4e"
	I1123 08:45:54.252821  335753 cri.go:89] found id: "ccbc32cc46374e63b5543296cbf640d9549909d2c0eceece3434d9968f9a5845"
	I1123 08:45:54.252825  335753 cri.go:89] found id: "b50ab2696e1e67f0ac0d0181ec5963bc5d4a3d2b32af3b3f35daa7d47a58c5e9"
	I1123 08:45:54.252830  335753 cri.go:89] found id: "246dd1e6858bda7bf5fe65c5645c08c92b5d0ed231ce7b5b4abd97fac4802f8f"
	I1123 08:45:54.252834  335753 cri.go:89] found id: "dc3d29d35b622d6b93507aea04eac9baab619145bad0ffc805501a72a5c213eb"
	I1123 08:45:54.252839  335753 cri.go:89] found id: "460c9f8d4b0648fd809721216954ce6522f68a315fdb7c8fbacbaeb8288f1ffb"
	I1123 08:45:54.252847  335753 cri.go:89] found id: "03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a"
	I1123 08:45:54.252851  335753 cri.go:89] found id: "3def24cbd2530ecf5a755e903ca77a95246cde5cba1a19043250f071201a1518"
	I1123 08:45:54.252855  335753 cri.go:89] found id: ""
	I1123 08:45:54.252895  335753 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:45:54.267654  335753 out.go:203] 
	W1123 08:45:54.268842  335753 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:45:54.268862  335753 out.go:285] * 
	* 
	W1123 08:45:54.273155  335753 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:45:54.274148  335753 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-726261 --alsologtostderr -v=1 failed: exit status 80
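The failure mode above is mechanical: `minikube pause` first disables the kubelet, then enumerates running containers with `sudo runc list -f json`; because the runc state directory `/run/runc` is absent on this CRI-O node, both the initial attempt and the ~305ms retry exit with status 1, and pause aborts with GUEST_PAUSE. A minimal Go sketch of that list-and-retry step (the command is taken from the log; the backoff values are illustrative, not minikube's actual policy):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc runs the same enumeration the log above shows failing
// when /run/runc does not exist.
func listRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	backoff := 300 * time.Millisecond // the log shows a ~304ms retry
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRunc()
		if err == nil {
			fmt.Printf("%s\n", out)
			return
		}
		// Mirrors the retry.go:31 pattern: log the failure, back off, retry.
		fmt.Printf("attempt %d: %v; retrying after %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("giving up: listing running containers failed")
}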
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-726261
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-726261:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387",
	        "Created": "2025-11-23T08:43:38.364416328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:50.667804948Z",
	            "FinishedAt": "2025-11-23T08:44:49.711995231Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/hostname",
	        "HostsPath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/hosts",
	        "LogPath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387-json.log",
	        "Name": "/default-k8s-diff-port-726261",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-726261:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-726261",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387",
	                "LowerDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-726261",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-726261/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-726261",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-726261",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-726261",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2b8d0d77255d4c20bb4618e494392c10ec6841c5f07e8e595a7c649e69015b0",
	            "SandboxKey": "/var/run/docker/netns/a2b8d0d77255",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-726261": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8e58961f30240336633bec998e074fa68c1170ebe5fe0d36562f8ff59e516d42",
	                    "EndpointID": "9c28991b39a8f699eed76c94eb114497c8f7961eca4122f54a5ad0dc02f5935d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "52:4b:4b:82:83:df",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-726261",
	                        "55c5a560eb12"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
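The harness captures the full `docker inspect` JSON above, but only a handful of fields feed the verdict (State, Ports, Networks). When reproducing this post-mortem by hand, `docker inspect` accepts a Go template to pull those fields directly; a small sketch using the container name from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Extract just the state fields the post-mortem cares about
	// instead of dumping the whole JSON document.
	out, err := exec.Command("docker", "inspect",
		"-f", "{{.State.Status}} paused={{.State.Paused}}",
		"default-k8s-diff-port-726261").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Printf("%s", out) // e.g. "running paused=false"
}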
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261: exit status 2 (355.422744ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
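`minikube status` encodes cluster state in its exit code, which is why the harness treats the nonzero exit above as informational ("may be ok") while the host itself prints "Running". A hedged sketch of reading both the output and the code, using the binary path and profile name from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "default-k8s-diff-port-726261")
	out, err := cmd.Output()
	fmt.Printf("status output: %s\n", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A nonzero code with output "Running" reflects component state
		// (here: kubelet stopped by the failed pause), not a hard error.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}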
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-726261 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-726261 logs -n 25: (1.199087886s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p no-preload-187607 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ addons  │ enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-177890                                                                                                                                                                                                               │ disable-driver-mounts-177890 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-057894 image list --format=json                                                                                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-057894 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ default-k8s-diff-port-726261 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p default-k8s-diff-port-726261 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ image   │ no-preload-187607 image list --format=json                                                                                                                                                                                                    │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-187607 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:45:01.745123  329090 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:01.745432  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745440  329090 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:01.745446  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745739  329090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:01.746375  329090 out.go:368] Setting JSON to false
	I1123 08:45:01.748064  329090 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5249,"bootTime":1763882253,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:45:01.748157  329090 start.go:143] virtualization: kvm guest
	I1123 08:45:01.750156  329090 out.go:179] * [embed-certs-756339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:45:01.753393  329090 notify.go:221] Checking for updates...
	I1123 08:45:01.753398  329090 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:45:01.755146  329090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:45:01.756598  329090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:01.757836  329090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:45:01.758954  329090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:45:01.760360  329090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:45:01.765276  329090 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765522  329090 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765681  329090 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:45:01.765827  329090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:45:01.800644  329090 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:45:01.801313  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.871017  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.860213573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.871190  329090 docker.go:319] overlay module found
	I1123 08:45:01.872879  329090 out.go:179] * Using the docker driver based on user configuration
	I1123 08:45:01.874146  329090 start.go:309] selected driver: docker
	I1123 08:45:01.874172  329090 start.go:927] validating driver "docker" against <nil>
	I1123 08:45:01.874185  329090 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:45:01.874731  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.950283  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.938442114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.950526  329090 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:45:01.950805  329090 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.952251  329090 out.go:179] * Using Docker driver with root privileges
	I1123 08:45:01.953421  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:01.953493  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:01.953508  329090 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:45:01.953584  329090 start.go:353] cluster config:
	{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:01.954827  329090 out.go:179] * Starting "embed-certs-756339" primary control-plane node in "embed-certs-756339" cluster
	I1123 08:45:01.955848  329090 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:45:01.957107  329090 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:45:01.958365  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:01.958393  329090 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:45:01.958408  329090 cache.go:65] Caching tarball of preloaded images
	I1123 08:45:01.958465  329090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:45:01.958507  329090 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:45:01.958523  329090 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:45:01.958635  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:01.958661  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json: {Name:mk2bf238bbe57398e8f0e67e0ff345b4c996e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:01.983475  329090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:45:01.983497  329090 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:45:01.983513  329090 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:45:01.983540  329090 start.go:360] acquireMachinesLock for embed-certs-756339: {Name:mk2607c5ea38ca6bd330e0a548b36202f67f84a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:01.983642  329090 start.go:364] duration metric: took 84.653µs to acquireMachinesLock for "embed-certs-756339"
	I1123 08:45:01.983672  329090 start.go:93] Provisioning new machine with config: &{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:01.983792  329090 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:45:01.986901  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.692445857s)
	I1123 08:45:01.987002  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.670756175s)
	I1123 08:45:01.987136  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.507320621s)
	I1123 08:45:01.987186  323816 api_server.go:72] duration metric: took 2.902108336s to wait for apiserver process to appear ...
	I1123 08:45:01.987204  323816 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:01.987282  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:01.988808  323816 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-187607 addons enable metrics-server
	
	I1123 08:45:01.992707  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:45:01.992732  323816 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
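The 500 above is the usual bootstrap transient: two poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) have not completed yet, so /healthz fails until they do; the same poll returns 200 one attempt later (08:45:02.491678 below). A minimal sketch of such a poll against the endpoint shown in the log (minikube's real client presents proper credentials; this sketch skips TLS verification purely for brevity):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		code := resp.StatusCode
		resp.Body.Close()
		if code == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		// A 500 here just means bootstrap hooks are still running.
		fmt.Println("healthz status:", code)
		time.Sleep(500 * time.Millisecond)
	}
}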
	I1123 08:45:01.994529  323816 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 08:45:01.422757  323135 addons.go:530] duration metric: took 3.555416147s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:01.910007  323135 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:45:01.915784  323135 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:45:01.917062  323135 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:01.917089  323135 api_server.go:131] duration metric: took 507.92158ms to wait for apiserver health ...
	I1123 08:45:01.917100  323135 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:01.921785  323135 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:01.921998  323135 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.922039  323135 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.922068  323135 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.922079  323135 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.922087  323135 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.922095  323135 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.922107  323135 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.922115  323135 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.922124  323135 system_pods.go:74] duration metric: took 5.016936ms to wait for pod list to return data ...
	I1123 08:45:01.922189  323135 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:01.925409  323135 default_sa.go:45] found service account: "default"
	I1123 08:45:01.925452  323135 default_sa.go:55] duration metric: took 3.245595ms for default service account to be created ...
	I1123 08:45:01.925463  323135 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:01.931804  323135 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.931872  323135 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.931898  323135 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.931961  323135 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.931995  323135 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.932018  323135 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.932037  323135 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.932066  323135 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.932076  323135 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.932086  323135 system_pods.go:126] duration metric: took 6.61665ms to wait for k8s-apps to be running ...
	I1123 08:45:01.932097  323135 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:01.932143  323135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:01.947263  323135 system_svc.go:56] duration metric: took 15.160659ms WaitForService to wait for kubelet
	I1123 08:45:01.947298  323135 kubeadm.go:587] duration metric: took 4.08017724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.947325  323135 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:01.950481  323135 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:01.950509  323135 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:01.950526  323135 node_conditions.go:105] duration metric: took 3.194245ms to run NodePressure ...
	I1123 08:45:01.950541  323135 start.go:242] waiting for startup goroutines ...
	I1123 08:45:01.950555  323135 start.go:247] waiting for cluster config update ...
	I1123 08:45:01.950571  323135 start.go:256] writing updated cluster config ...
	I1123 08:45:01.950876  323135 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:01.955038  323135 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:01.958449  323135 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:03.965246  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:01.995584  323816 addons.go:530] duration metric: took 2.910424664s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:02.487321  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:02.491678  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:45:02.492738  323816 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:02.492762  323816 api_server.go:131] duration metric: took 505.498506ms to wait for apiserver health ...
	I1123 08:45:02.492770  323816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:02.496254  323816 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:02.496282  323816 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.496290  323816 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.496296  323816 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.496302  323816 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.496310  323816 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.496317  323816 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.496324  323816 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.496334  323816 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.496340  323816 system_pods.go:74] duration metric: took 3.565076ms to wait for pod list to return data ...
	I1123 08:45:02.496348  323816 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:02.498409  323816 default_sa.go:45] found service account: "default"
	I1123 08:45:02.498426  323816 default_sa.go:55] duration metric: took 2.073405ms for default service account to be created ...
	I1123 08:45:02.498434  323816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:02.500853  323816 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.500888  323816 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.500899  323816 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.500912  323816 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.500929  323816 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.500941  323816 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.500951  323816 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.500961  323816 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.500971  323816 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.500978  323816 system_pods.go:126] duration metric: took 2.538671ms to wait for k8s-apps to be running ...
	I1123 08:45:02.500991  323816 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:02.501036  323816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:02.522199  323816 system_svc.go:56] duration metric: took 21.201972ms WaitForService to wait for kubelet
	I1123 08:45:02.522225  323816 kubeadm.go:587] duration metric: took 3.437147085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:02.522246  323816 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:02.524870  323816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:02.524905  323816 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:02.524925  323816 node_conditions.go:105] duration metric: took 2.673388ms to run NodePressure ...
	I1123 08:45:02.524943  323816 start.go:242] waiting for startup goroutines ...
	I1123 08:45:02.524953  323816 start.go:247] waiting for cluster config update ...
	I1123 08:45:02.524970  323816 start.go:256] writing updated cluster config ...
	I1123 08:45:02.525241  323816 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:02.529440  323816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:02.532956  323816 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:04.545550  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:01.985817  329090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:45:01.986054  329090 start.go:159] libmachine.API.Create for "embed-certs-756339" (driver="docker")
	I1123 08:45:01.986094  329090 client.go:173] LocalClient.Create starting
	I1123 08:45:01.986158  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:45:01.986202  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986228  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986299  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:45:01.986331  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986349  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986747  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:45:02.006351  329090 cli_runner.go:211] docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:45:02.006428  329090 network_create.go:284] running [docker network inspect embed-certs-756339] to gather additional debugging logs...
	I1123 08:45:02.006453  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339
	W1123 08:45:02.024029  329090 cli_runner.go:211] docker network inspect embed-certs-756339 returned with exit code 1
	I1123 08:45:02.024056  329090 network_create.go:287] error running [docker network inspect embed-certs-756339]: docker network inspect embed-certs-756339: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-756339 not found
	I1123 08:45:02.024076  329090 network_create.go:289] output of [docker network inspect embed-certs-756339]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-756339 not found
	
	** /stderr **
	I1123 08:45:02.024188  329090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:02.041589  329090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:45:02.042147  329090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:45:02.042884  329090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:45:02.043340  329090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c80b7bca17a7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:f1:41:59:09:b5} reservation:<nil>}
	I1123 08:45:02.043937  329090 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8e58961f3024 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b6:f0:e4:3c:63:d5} reservation:<nil>}
	I1123 08:45:02.044437  329090 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e4a86ee726da IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ae:37:bc:fe:9d:3a} reservation:<nil>}
	I1123 08:45:02.045221  329090 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06cd0}
	I1123 08:45:02.045242  329090 network_create.go:124] attempt to create docker network embed-certs-756339 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1123 08:45:02.045287  329090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-756339 embed-certs-756339
	I1123 08:45:02.095267  329090 network_create.go:108] docker network embed-certs-756339 192.168.103.0/24 created
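The subnet scan above starts at 192.168.49.0/24 and steps the third octet by 9 (49, 58, 67, 76, 85, 94) until it finds a /24 with no existing bridge, landing on 192.168.103.0/24. A simplified sketch of that search, assuming the set of taken CIDRs has already been collected from `docker network inspect` (the real network.go also records interface details and reservations):

```go
package main

import "fmt"

// pickFreeSubnet mimics the scan in the log: starting at 192.168.49.0/24 and
// stepping the third octet by 9, return the first /24 not already in use.
func pickFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // no free /24 in this range
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(pickFreeSubnet(taken)) // 192.168.103.0/24, as in the log
}
```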
	I1123 08:45:02.095296  329090 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-756339" container
	I1123 08:45:02.095350  329090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:45:02.111533  329090 cli_runner.go:164] Run: docker volume create embed-certs-756339 --label name.minikube.sigs.k8s.io=embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:45:02.128824  329090 oci.go:103] Successfully created a docker volume embed-certs-756339
	I1123 08:45:02.128896  329090 cli_runner.go:164] Run: docker run --rm --name embed-certs-756339-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --entrypoint /usr/bin/test -v embed-certs-756339:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:45:02.559029  329090 oci.go:107] Successfully prepared a docker volume embed-certs-756339
	I1123 08:45:02.559098  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:02.559108  329090 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:45:02.559163  329090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:45:06.464312  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:08.466215  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:06.707246  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:09.040137  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:11.046122  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:07.131448  329090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.572224972s)
	I1123 08:45:07.131484  329090 kic.go:203] duration metric: took 4.572370498s to extract preloaded images to volume ...
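The preload step mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the named volume, so the new node starts with all images already under /var. A hedged os/exec sketch of the same invocation (the local tarball path is a hypothetical placeholder; the flags mirror the logged command):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4" // hypothetical path
	volume := "embed-certs-756339"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948"

	// docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("preload extracted into volume", volume)
}
```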
	W1123 08:45:07.131573  329090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:45:07.131616  329090 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:45:07.131860  329090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:45:07.219659  329090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-756339 --name embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-756339 --network embed-certs-756339 --ip 192.168.103.2 --volume embed-certs-756339:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:45:07.635482  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Running}}
	I1123 08:45:07.658965  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.681327  329090 cli_runner.go:164] Run: docker exec embed-certs-756339 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:45:07.737769  329090 oci.go:144] the created container "embed-certs-756339" has a running status.
	I1123 08:45:07.737802  329090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa...
	I1123 08:45:07.895228  329090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:45:07.935222  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.958382  329090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:45:07.958405  329090 kic_runner.go:114] Args: [docker exec --privileged embed-certs-756339 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:45:08.015520  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:08.039803  329090 machine.go:94] provisionDockerMachine start ...
	I1123 08:45:08.039898  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:08.064345  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:08.064680  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:08.064723  329090 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:45:08.065347  329090 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47890->127.0.0.1:33131: read: connection reset by peer
	I1123 08:45:11.244730  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
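The first dial above fails with "connection reset by peer" because sshd inside the just-started container is not accepting connections yet; the provisioner simply retries until the handshake succeeds. A sketch of that retry loop with golang.org/x/crypto/ssh (the attempt count, sleep, and key path are assumptions, not minikube's exact policy):

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd inside the freshly started container
// accepts the handshake; early attempts often fail with
// "connection reset by peer", exactly as in the log above.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
}

func main() {
	key, err := os.ReadFile("/home/jenkins/.minikube/machines/embed-certs-756339/id_rsa") // hypothetical path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
	}
	client, err := dialWithRetry("127.0.0.1:33131", cfg, 30)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh up")
}
```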
	I1123 08:45:11.244755  329090 ubuntu.go:182] provisioning hostname "embed-certs-756339"
	I1123 08:45:11.244812  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.273763  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.274055  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.274072  329090 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-756339 && echo "embed-certs-756339" | sudo tee /etc/hostname
	I1123 08:45:11.457570  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.457714  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.488146  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.488457  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.488485  329090 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-756339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-756339/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-756339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:45:11.660198  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:45:11.660362  329090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:45:11.660453  329090 ubuntu.go:190] setting up certificates
	I1123 08:45:11.660471  329090 provision.go:84] configureAuth start
	I1123 08:45:11.661011  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:11.684982  329090 provision.go:143] copyHostCerts
	I1123 08:45:11.685043  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:45:11.685053  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:45:11.685140  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:45:11.685249  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:45:11.685255  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:45:11.685292  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:45:11.685383  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:45:11.685391  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:45:11.685427  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:45:11.685506  329090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.embed-certs-756339 san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]
	I1123 08:45:11.758697  329090 provision.go:177] copyRemoteCerts
	I1123 08:45:11.758777  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:45:11.758833  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.787179  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:11.905965  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:45:11.934744  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:45:11.961707  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:45:11.985963  329090 provision.go:87] duration metric: took 325.479379ms to configureAuth
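configureAuth signs a server certificate whose SANs cover every name the machine can be reached by; the log shows san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]. A minimal self-signed sketch of a SAN certificate like that (minikube actually signs with its ca.pem/ca-key.pem rather than self-signing, so treat this as a simplified stand-in):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-756339"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: every IP and hostname the server cert must cover.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:    []string{"embed-certs-756339", "localhost", "minikube"},
	}
	// Self-signed here (template doubles as parent); minikube uses its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```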
	I1123 08:45:11.985992  329090 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:45:11.986220  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:11.986358  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.011499  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:12.011833  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:12.011872  329090 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:45:12.373361  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:45:12.373388  329090 machine.go:97] duration metric: took 4.333562614s to provisionDockerMachine
	I1123 08:45:12.373402  329090 client.go:176] duration metric: took 10.387301049s to LocalClient.Create
	I1123 08:45:12.373431  329090 start.go:167] duration metric: took 10.387376613s to libmachine.API.Create "embed-certs-756339"
	I1123 08:45:12.373444  329090 start.go:293] postStartSetup for "embed-certs-756339" (driver="docker")
	I1123 08:45:12.373458  329090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:45:12.373521  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:45:12.373575  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.394472  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.505303  329090 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:45:12.509881  329090 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:45:12.509946  329090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:45:12.509962  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:45:12.510025  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:45:12.510127  329090 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:45:12.510256  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:45:12.520339  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:12.547586  329090 start.go:296] duration metric: took 174.127267ms for postStartSetup
	I1123 08:45:12.548040  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.572325  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:12.572597  329090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:45:12.572652  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.595241  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.708576  329090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:45:12.713786  329090 start.go:128] duration metric: took 10.729979645s to createHost
	I1123 08:45:12.713812  329090 start.go:83] releasing machines lock for "embed-certs-756339", held for 10.730153164s
	I1123 08:45:12.713888  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.744434  329090 ssh_runner.go:195] Run: cat /version.json
	I1123 08:45:12.744496  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.744678  329090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:45:12.744776  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.771659  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.771722  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.970377  329090 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:12.980003  329090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:45:13.031076  329090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:45:13.037986  329090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:45:13.038091  329090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:45:13.078655  329090 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:45:13.078678  329090 start.go:496] detecting cgroup driver to use...
	I1123 08:45:13.078778  329090 detect.go:190] detected "systemd" cgroup driver on host os
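Cgroup driver detection here boils down to asking whether the host is managed by systemd. A rough sketch of such a probe (the /run/systemd/system check is a common heuristic and an assumption about, not a copy of, minikube's detect.go):

```go
package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver is a rough sketch of the host probe in the log: if the
// host runs systemd, the container runtime should use the "systemd" cgroup
// manager; otherwise fall back to cgroupfs.
func detectCgroupDriver() string {
	if _, err := os.Stat("/run/systemd/system"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
}
```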
	I1123 08:45:13.078826  329090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:45:13.102501  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:45:13.121011  329090 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:45:13.121088  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:45:13.144025  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:45:13.166610  329090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:45:13.266885  329090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:45:13.383738  329090 docker.go:234] disabling docker service ...
	I1123 08:45:13.383808  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:45:13.408902  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:45:13.425055  329090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:45:13.533375  329090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:45:13.641970  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:45:13.655349  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:45:13.672802  329090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:45:13.672859  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.682619  329090 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:45:13.682671  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.691340  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.700633  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.709880  329090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:45:13.717844  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.726872  329090 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.741035  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.750011  329090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:45:13.757738  329090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:45:13.764834  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:13.846176  329090 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:45:15.041719  329090 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.195506975s)
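The sed runs above pin the pause image and switch CRI-O to the systemd cgroup manager before the daemon restart. The same two rewrites expressed as Go regexp replacements over an in-memory copy of 02-crio.conf (the sample input contents are assumed):

```go
package main

import (
	"fmt"
	"regexp"
)

// Apply the same two rewrites the log performs with sed on
// /etc/crio/crio.conf.d/02-crio.conf: pin the pause image and
// force the systemd cgroup manager.
func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
```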
	I1123 08:45:15.041743  329090 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:45:15.041806  329090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:45:15.046071  329090 start.go:564] Will wait 60s for crictl version
	I1123 08:45:15.046136  329090 ssh_runner.go:195] Run: which crictl
	I1123 08:45:15.049573  329090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:45:15.078843  329090 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:45:15.078920  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.108962  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.139712  329090 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 08:45:10.968346  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.466785  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.540283  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:16.038123  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:15.141197  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:15.159501  329090 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 08:45:15.163431  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:15.173476  329090 kubeadm.go:884] updating cluster {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:45:15.173575  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:15.173616  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.210172  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.210193  329090 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:45:15.210244  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.237085  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.237104  329090 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:45:15.237113  329090 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 08:45:15.237217  329090 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-756339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:45:15.237295  329090 ssh_runner.go:195] Run: crio config
	I1123 08:45:15.283601  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:15.283625  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:15.283643  329090 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:45:15.283669  329090 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-756339 NodeName:embed-certs-756339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:45:15.283837  329090 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-756339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:45:15.283904  329090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:45:15.292504  329090 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:45:15.292566  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:45:15.300378  329090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 08:45:15.312974  329090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:45:15.327882  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1123 08:45:15.340181  329090 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:45:15.343646  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:15.354110  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:15.443097  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:15.467751  329090 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339 for IP: 192.168.103.2
	I1123 08:45:15.467775  329090 certs.go:195] generating shared ca certs ...
	I1123 08:45:15.467794  329090 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.467944  329090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:45:15.468013  329090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:45:15.468026  329090 certs.go:257] generating profile certs ...
	I1123 08:45:15.468092  329090 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key
	I1123 08:45:15.468108  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt with IP's: []
	I1123 08:45:15.681556  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt ...
	I1123 08:45:15.681578  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt: {Name:mk22797cd88ef1f778f787e25af3588a79d11855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681755  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key ...
	I1123 08:45:15.681771  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key: {Name:mk2507e79a5f05fa7cb11db2054cd014292902df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681880  329090 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354
	I1123 08:45:15.681896  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1123 08:45:15.727484  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 ...
	I1123 08:45:15.727506  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354: {Name:mkade0e3ba918afced6504828d64527edcb7e06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727677  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 ...
	I1123 08:45:15.727718  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354: {Name:mke39adf49845e1231f060e2780420238d4a87bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727834  329090 certs.go:382] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt
	I1123 08:45:15.727927  329090 certs.go:386] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key
	I1123 08:45:15.728008  329090 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key
	I1123 08:45:15.728025  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt with IP's: []
	I1123 08:45:15.834669  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt ...
	I1123 08:45:15.834720  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt: {Name:mkad5e6304235e6d8f0ebd086b0ccf458022d6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.834861  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key ...
	I1123 08:45:15.834879  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key: {Name:mka603d9600779233619dbc354e88b03aa5d1f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.835045  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:45:15.835081  329090 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:45:15.835092  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:45:15.835118  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:45:15.835142  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:45:15.835178  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:45:15.835218  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:15.835729  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:45:15.855139  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:45:15.873868  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:45:15.894547  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:45:15.912933  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:45:15.930981  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:45:15.949401  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:45:15.970429  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:45:15.989205  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:45:16.008793  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:45:16.025737  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:45:16.043175  329090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:45:16.055931  329090 ssh_runner.go:195] Run: openssl version
	I1123 08:45:16.061639  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:45:16.069652  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073176  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073220  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.108921  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:45:16.116885  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:45:16.124882  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128591  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128656  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.185316  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:45:16.195245  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:45:16.206667  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211327  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211374  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.251180  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
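Each trusted cert gets a <subject-hash>.0 symlink in /etc/ssl/certs so OpenSSL can locate it by hash; the hash comes from `openssl x509 -hash -noout`, and the `test -L || ln -fs` guard makes the link idempotent. A sketch of the same pattern (the helper name hashLink is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink reproduces the log's pattern: compute the OpenSSL subject hash of a
// PEM cert and symlink <hash>.0 in the trust directory back to it.
func hashLink(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e", as in the log
	link := fmt.Sprintf("%s/%s.0", trustDir, hash)
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/144882.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```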
	I1123 08:45:16.260175  329090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:45:16.264022  329090 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:45:16.264083  329090 kubeadm.go:401] StartCluster: {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:16.264171  329090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:45:16.264218  329090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:45:16.292235  329090 cri.go:89] found id: ""
	I1123 08:45:16.292292  329090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:45:16.300794  329090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:45:16.308741  329090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:45:16.308794  329090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:45:16.316404  329090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:45:16.316422  329090 kubeadm.go:158] found existing configuration files:
	
	I1123 08:45:16.316458  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:45:16.324309  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:45:16.324349  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:45:16.332260  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:45:16.340786  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:45:16.340842  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:45:16.348658  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.358536  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:45:16.358583  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.368595  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:45:16.377891  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:45:16.377952  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:45:16.386029  329090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:45:16.424131  329090 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:45:16.424226  329090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:45:16.444456  329090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:45:16.444527  329090 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:45:16.444572  329090 kubeadm.go:319] OS: Linux
	I1123 08:45:16.444654  329090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:45:16.444763  329090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:45:16.444824  329090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:45:16.444916  329090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:45:16.444986  329090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:45:16.445059  329090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:45:16.445128  329090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:45:16.445197  329090 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:45:16.502432  329090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:45:16.502566  329090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:45:16.502717  329090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:45:16.512573  329090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:45:16.514857  329090 out.go:252]   - Generating certificates and keys ...
	I1123 08:45:16.514990  329090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:45:16.515094  329090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:45:16.608081  329090 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:45:16.680528  329090 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:45:16.801156  329090 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:45:17.144723  329090 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:45:17.391838  329090 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:45:17.392042  329090 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.447222  329090 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:45:17.447383  329090 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.644625  329090 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:45:17.916674  329090 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:45:18.538498  329090 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:45:18.538728  329090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:45:18.967277  329090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:45:19.377546  329090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:45:19.559622  329090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:45:20.075738  329090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:45:20.364836  329090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:45:20.365389  329090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:45:20.380029  329090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 08:45:15.964678  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.463898  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.038557  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:20.040142  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:20.381602  329090 out.go:252]   - Booting up control plane ...
	I1123 08:45:20.381763  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:45:20.381900  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:45:20.382610  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:45:20.395865  329090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:45:20.396015  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:45:20.402081  329090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:45:20.402378  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:45:20.402436  329090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:45:20.508331  329090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:45:20.508495  329090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:45:22.009994  329090 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501781773s
	I1123 08:45:22.014389  329090 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:45:22.014519  329090 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:45:22.014637  329090 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:45:22.014773  329090 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:45:23.091748  329090 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.077310791s
	I1123 08:45:23.589008  329090 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.574535055s
	I1123 08:45:25.015461  329090 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001048624s
	I1123 08:45:25.026445  329090 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:45:25.036344  329090 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:45:25.045136  329090 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:45:25.045341  329090 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-756339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:45:25.052213  329090 kubeadm.go:319] [bootstrap-token] Using token: jh7osp.28agjpkabxiw65fh
	W1123 08:45:20.963406  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.964352  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.538516  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:24.539132  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:25.055029  329090 out.go:252]   - Configuring RBAC rules ...
	I1123 08:45:25.055175  329090 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:45:25.058117  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:45:25.062975  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:45:25.066360  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:45:25.069196  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:45:25.071492  329090 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:45:25.419913  329090 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:45:25.836463  329090 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:45:26.420358  329090 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:45:26.421135  329090 kubeadm.go:319] 
	I1123 08:45:26.421252  329090 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:45:26.421277  329090 kubeadm.go:319] 
	I1123 08:45:26.421378  329090 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:45:26.421390  329090 kubeadm.go:319] 
	I1123 08:45:26.421426  329090 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:45:26.421521  329090 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:45:26.421603  329090 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:45:26.421620  329090 kubeadm.go:319] 
	I1123 08:45:26.421735  329090 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:45:26.421746  329090 kubeadm.go:319] 
	I1123 08:45:26.421806  329090 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:45:26.421815  329090 kubeadm.go:319] 
	I1123 08:45:26.421881  329090 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:45:26.421994  329090 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:45:26.422098  329090 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:45:26.422107  329090 kubeadm.go:319] 
	I1123 08:45:26.422206  329090 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:45:26.422316  329090 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:45:26.422325  329090 kubeadm.go:319] 
	I1123 08:45:26.422429  329090 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422527  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 08:45:26.422562  329090 kubeadm.go:319] 	--control-plane 
	I1123 08:45:26.422571  329090 kubeadm.go:319] 
	I1123 08:45:26.422711  329090 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:45:26.422722  329090 kubeadm.go:319] 
	I1123 08:45:26.422841  329090 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422947  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
	I1123 08:45:26.425509  329090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:45:26.425638  329090 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:45:26.425665  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:26.425679  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:26.427041  329090 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:45:26.427891  329090 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:45:26.432307  329090 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:45:26.432326  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:45:26.445364  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
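Two steps are compressed into the CNI lines above: minikube first confirms the stock CNI plugins are installed on the node (the stat of /opt/cni/bin/portmap), then applies the kindnet manifest it staged under /var/tmp/minikube using the cluster's own kubectl binary. Run by hand on the node, assuming the same staged paths from this log, that amounts to:

    # Verify the standard CNI plugins are present before applying a manifest.
    stat /opt/cni/bin/portmap

    # Apply the kindnet manifest minikube wrote to /var/tmp/minikube.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml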
	I1123 08:45:26.642490  329090 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:45:26.642551  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:26.642592  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-756339 minikube.k8s.io/updated_at=2025_11_23T08_45_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-756339 minikube.k8s.io/primary=true
	I1123 08:45:26.729263  329090 ops.go:34] apiserver oom_adj: -16
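The -16 value logged just above comes straight from procfs: the run at 08:45:26.642490 reads the legacy oom_adj file for the kube-apiserver process, which confirms the apiserver is protected from the OOM killer. On modern kernels the same knob is exposed via oom_score_adj; a quick equivalent check (the two files use different scales, so the numbers differ):

    # Read the OOM-killer protection for the apiserver process.
    cat /proc/$(pgrep -x kube-apiserver)/oom_score_adj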
	I1123 08:45:26.729393  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 08:45:25.464467  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:27.964097  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:26.539240  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:29.038507  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:27.229843  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:27.730298  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.230009  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.730490  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.229984  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.730299  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.229522  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.729582  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.230290  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.293892  329090 kubeadm.go:1114] duration metric: took 4.651396638s to wait for elevateKubeSystemPrivileges
	I1123 08:45:31.293931  329090 kubeadm.go:403] duration metric: took 15.029851328s to StartCluster
	I1123 08:45:31.293953  329090 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.294038  329090 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:31.295585  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.295872  329090 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:31.295936  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:45:31.296007  329090 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:45:31.296114  329090 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-756339"
	I1123 08:45:31.296118  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:31.296134  329090 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-756339"
	I1123 08:45:31.296128  329090 addons.go:70] Setting default-storageclass=true in profile "embed-certs-756339"
	I1123 08:45:31.296166  329090 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-756339"
	I1123 08:45:31.296176  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.296604  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.296720  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.297232  329090 out.go:179] * Verifying Kubernetes components...
	I1123 08:45:31.299135  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:31.322679  329090 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:45:31.324511  329090 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.324536  329090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:45:31.324593  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.329451  329090 addons.go:239] Setting addon default-storageclass=true in "embed-certs-756339"
	I1123 08:45:31.329500  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.330018  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.359473  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.359508  329090 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.359523  329090 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:45:31.359576  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.383150  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.400104  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
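The long sed pipeline above rewrites the live coredns ConfigMap in place: it inserts a hosts block ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive ahead of `errors`), then pushes the result back with kubectl replace. Reconstructed from the sed expressions rather than captured from the cluster, the injected Corefile fragment would read:

    hosts {
       192.168.103.1 host.minikube.internal
       fallthrough
    }

The "host record injected into CoreDNS's ConfigMap" line shortly after confirms the edit landed.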
	I1123 08:45:31.438850  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:31.477184  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.500079  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.590832  329090 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 08:45:31.592356  329090 node_ready.go:35] waiting up to 6m0s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:31.806094  329090 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 08:45:30.466331  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:32.963158  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:34.963993  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:31.541665  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:34.038345  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:31.807238  329090 addons.go:530] duration metric: took 511.238501ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:45:32.094332  329090 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-756339" context rescaled to 1 replicas
	W1123 08:45:33.595476  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:36.094914  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:37.463401  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:39.463744  323135 pod_ready.go:94] pod "coredns-66bc5c9577-8f8f5" is "Ready"
	I1123 08:45:39.463771  323135 pod_ready.go:86] duration metric: took 37.505301624s for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.466073  323135 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.469881  323135 pod_ready.go:94] pod "etcd-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.469907  323135 pod_ready.go:86] duration metric: took 3.813451ms for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.471783  323135 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.475591  323135 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.475615  323135 pod_ready.go:86] duration metric: took 3.808626ms for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.477543  323135 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.662072  323135 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.662095  323135 pod_ready.go:86] duration metric: took 184.532328ms for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.861972  323135 pod_ready.go:83] waiting for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.262090  323135 pod_ready.go:94] pod "kube-proxy-sn4sp" is "Ready"
	I1123 08:45:40.262116  323135 pod_ready.go:86] duration metric: took 400.120277ms for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.462054  323135 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862186  323135 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:40.862212  323135 pod_ready.go:86] duration metric: took 400.136767ms for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862222  323135 pod_ready.go:40] duration metric: took 38.907156113s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:40.906296  323135 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:40.908135  323135 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-726261" cluster and "default" namespace by default
	W1123 08:45:36.537535  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:38.537920  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:40.537903  323816 pod_ready.go:94] pod "coredns-66bc5c9577-khlrk" is "Ready"
	I1123 08:45:40.537927  323816 pod_ready.go:86] duration metric: took 38.004948026s for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.540197  323816 pod_ready.go:83] waiting for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.543594  323816 pod_ready.go:94] pod "etcd-no-preload-187607" is "Ready"
	I1123 08:45:40.543613  323816 pod_ready.go:86] duration metric: took 3.39504ms for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.545430  323816 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.548523  323816 pod_ready.go:94] pod "kube-apiserver-no-preload-187607" is "Ready"
	I1123 08:45:40.548540  323816 pod_ready.go:86] duration metric: took 3.086438ms for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.550144  323816 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.736784  323816 pod_ready.go:94] pod "kube-controller-manager-no-preload-187607" is "Ready"
	I1123 08:45:40.736810  323816 pod_ready.go:86] duration metric: took 186.650289ms for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.936965  323816 pod_ready.go:83] waiting for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:38.095893  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:40.595721  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	I1123 08:45:41.336483  323816 pod_ready.go:94] pod "kube-proxy-f9d8j" is "Ready"
	I1123 08:45:41.336508  323816 pod_ready.go:86] duration metric: took 399.518187ms for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.536451  323816 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936068  323816 pod_ready.go:94] pod "kube-scheduler-no-preload-187607" is "Ready"
	I1123 08:45:41.936095  323816 pod_ready.go:86] duration metric: took 399.617585ms for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936110  323816 pod_ready.go:40] duration metric: took 39.406642608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:41.977753  323816 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:41.979147  323816 out.go:179] * Done! kubectl is now configured to use "no-preload-187607" cluster and "default" namespace by default
	I1123 08:45:43.095643  329090 node_ready.go:49] node "embed-certs-756339" is "Ready"
	I1123 08:45:43.095676  329090 node_ready.go:38] duration metric: took 11.503297149s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:43.095722  329090 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:45:43.095787  329090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:45:43.107848  329090 api_server.go:72] duration metric: took 11.811934824s to wait for apiserver process to appear ...
	I1123 08:45:43.107869  329090 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:43.107884  329090 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:45:43.112629  329090 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:45:43.113413  329090 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:43.113433  329090 api_server.go:131] duration metric: took 5.559653ms to wait for apiserver health ...
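The healthz probe above is a plain HTTPS GET that returns the literal body `ok`, exactly as logged. An equivalent manual check from a shell with access to the node network would be roughly the following (using -k to skip verification for brevity, since the serving certificate is signed by minikubeCA rather than a system CA):

    curl -k https://192.168.103.2:8443/healthz
    # ok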
	I1123 08:45:43.113441  329090 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:43.116485  329090 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:43.116510  329090 system_pods.go:61] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.116515  329090 system_pods.go:61] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.116520  329090 system_pods.go:61] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.116525  329090 system_pods.go:61] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.116532  329090 system_pods.go:61] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.116536  329090 system_pods.go:61] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.116539  329090 system_pods.go:61] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.116545  329090 system_pods.go:61] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.116550  329090 system_pods.go:74] duration metric: took 3.105251ms to wait for pod list to return data ...
	I1123 08:45:43.116558  329090 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:43.118523  329090 default_sa.go:45] found service account: "default"
	I1123 08:45:43.118538  329090 default_sa.go:55] duration metric: took 1.974886ms for default service account to be created ...
	I1123 08:45:43.118545  329090 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:43.120780  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.120802  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.120810  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.120815  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.120819  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.120826  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.120831  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.120834  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.120839  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.120863  329090 retry.go:31] will retry after 215.602357ms: missing components: kube-dns
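The retry lines that follow show minikube re-listing kube-system pods with growing delays until kube-dns reports Running. The same wait, done by hand against this cluster, is roughly the loop below (an illustrative sketch, not minikube's actual retry code):

    # Poll until the coredns pod for this cluster reports phase Running.
    until kubectl -n kube-system get pods -l k8s-app=kube-dns \
          -o jsonpath='{.items[*].status.phase}' | grep -q Running; do
      sleep 0.3
    done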
	I1123 08:45:43.340425  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.340455  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.340462  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.340467  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.340472  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.340477  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.340480  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.340483  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.340488  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.340504  329090 retry.go:31] will retry after 325.287893ms: missing components: kube-dns
	I1123 08:45:43.668913  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.668952  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.668962  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.668971  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.668977  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.668983  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.668987  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.668993  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.669002  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.669025  329090 retry.go:31] will retry after 462.937798ms: missing components: kube-dns
	I1123 08:45:44.135919  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:44.135950  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Running
	I1123 08:45:44.135957  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:44.135962  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:44.135967  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:44.135972  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:44.135977  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:44.135983  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:44.135988  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Running
	I1123 08:45:44.135997  329090 system_pods.go:126] duration metric: took 1.017446384s to wait for k8s-apps to be running ...
	I1123 08:45:44.136008  329090 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:44.136053  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:44.148387  329090 system_svc.go:56] duration metric: took 12.375192ms WaitForService to wait for kubelet
	I1123 08:45:44.148408  329090 kubeadm.go:587] duration metric: took 12.85249816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:44.148426  329090 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:44.150884  329090 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:44.150906  329090 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:44.150923  329090 node_conditions.go:105] duration metric: took 2.493335ms to run NodePressure ...
	I1123 08:45:44.150933  329090 start.go:242] waiting for startup goroutines ...
	I1123 08:45:44.150943  329090 start.go:247] waiting for cluster config update ...
	I1123 08:45:44.150953  329090 start.go:256] writing updated cluster config ...
	I1123 08:45:44.151188  329090 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:44.154964  329090 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:44.158442  329090 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.162122  329090 pod_ready.go:94] pod "coredns-66bc5c9577-ffmn2" is "Ready"
	I1123 08:45:44.162139  329090 pod_ready.go:86] duration metric: took 3.680173ms for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.163781  329090 pod_ready.go:83] waiting for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.167030  329090 pod_ready.go:94] pod "etcd-embed-certs-756339" is "Ready"
	I1123 08:45:44.167046  329090 pod_ready.go:86] duration metric: took 3.249458ms for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.168620  329090 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.171889  329090 pod_ready.go:94] pod "kube-apiserver-embed-certs-756339" is "Ready"
	I1123 08:45:44.171905  329090 pod_ready.go:86] duration metric: took 3.265991ms for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.173681  329090 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.558804  329090 pod_ready.go:94] pod "kube-controller-manager-embed-certs-756339" is "Ready"
	I1123 08:45:44.558838  329090 pod_ready.go:86] duration metric: took 385.124392ms for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.759793  329090 pod_ready.go:83] waiting for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.158864  329090 pod_ready.go:94] pod "kube-proxy-npnsh" is "Ready"
	I1123 08:45:45.158887  329090 pod_ready.go:86] duration metric: took 399.071703ms for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.360200  329090 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758770  329090 pod_ready.go:94] pod "kube-scheduler-embed-certs-756339" is "Ready"
	I1123 08:45:45.758800  329090 pod_ready.go:86] duration metric: took 398.571969ms for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758811  329090 pod_ready.go:40] duration metric: took 1.603821403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:45.800049  329090 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:45.802064  329090 out.go:179] * Done! kubectl is now configured to use "embed-certs-756339" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 08:45:14 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:14.338614737Z" level=info msg="Started container" PID=1713 containerID=db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper id=002c8ea3-1a77-41b4-92c3-b10ce536bbb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5d3c567bada8d6c478ee2a6cf0140223d1118fc9328226eb923517a2bcd256c
	Nov 23 08:45:15 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:15.194476038Z" level=info msg="Removing container: 3c2b322e336bc6a6467ebdd9dc0c5441806dd7af030c146c481cf4ed0ad46183" id=b60fe394-0153-48e7-9e48-7b960f0e5305 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:15 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:15.20548823Z" level=info msg="Removed container 3c2b322e336bc6a6467ebdd9dc0c5441806dd7af030c146c481cf4ed0ad46183: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper" id=b60fe394-0153-48e7-9e48-7b960f0e5305 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.241677715Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=768ec900-ecfa-4ff0-a0f1-700de7911b4d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.242653127Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8db9ad94-1add-4cea-a634-f1fdc564930c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.24376899Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ec994658-c0cb-415c-ae94-a5ebeee151e8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.243892749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.248468834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.248660501Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4da7f54502e3b070fcef4e49f3e6ef4bf27b2f927b8d501944ddf60e77a4d237/merged/etc/passwd: no such file or directory"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.248712242Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4da7f54502e3b070fcef4e49f3e6ef4bf27b2f927b8d501944ddf60e77a4d237/merged/etc/group: no such file or directory"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.248996599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.276001089Z" level=info msg="Created container 4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70: kube-system/storage-provisioner/storage-provisioner" id=ec994658-c0cb-415c-ae94-a5ebeee151e8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.276432507Z" level=info msg="Starting container: 4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70" id=26d0eb19-eeaa-4c14-bc17-bb90ccf162b7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.278190339Z" level=info msg="Started container" PID=1729 containerID=4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70 description=kube-system/storage-provisioner/storage-provisioner id=26d0eb19-eeaa-4c14-bc17-bb90ccf162b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=404017157e4ba1ec9dc9b9d6188448955d3bf6d1e0b8df7faa11db0a7e03e767
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.098830563Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=11fb38a1-c0cb-4c52-bafc-9184f9d3880b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.099832874Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5c436a11-a135-4c0d-b5b6-3a28b2eccc8f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.100872094Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper" id=4135224e-5e73-4e3f-acc4-03d744269689 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.100989249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.106316175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.106935811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.135055604Z" level=info msg="Created container 03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper" id=4135224e-5e73-4e3f-acc4-03d744269689 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.135551706Z" level=info msg="Starting container: 03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a" id=ac965c79-82f9-4e42-b0d0-ed0b70d3652a name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.137350715Z" level=info msg="Started container" PID=1745 containerID=03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper id=ac965c79-82f9-4e42-b0d0-ed0b70d3652a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5d3c567bada8d6c478ee2a6cf0140223d1118fc9328226eb923517a2bcd256c
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.248989455Z" level=info msg="Removing container: db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221" id=8baa5ebf-436a-4497-b24d-01ca2722ef30 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.257361638Z" level=info msg="Removed container db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper" id=8baa5ebf-436a-4497-b24d-01ca2722ef30 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	03ce7818d9d50       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   d5d3c567bada8       dashboard-metrics-scraper-6ffb444bf9-tb8zk             kubernetes-dashboard
	4df2466c9bde6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   404017157e4ba       storage-provisioner                                    kube-system
	3def24cbd2530       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   519664773f608       kubernetes-dashboard-855c9754f9-fnxnm                  kubernetes-dashboard
	7ef10f196889c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   bccfb074b8fd4       coredns-66bc5c9577-8f8f5                               kube-system
	1da1b5290a3ae       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   3d83cd81ace8c       kindnet-4zwgv                                          kube-system
	07d7e34fb9e54       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   1ba72a1606cf3       busybox                                                default
	5570992f3d35e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   675782d12a9c4       kube-proxy-sn4sp                                       kube-system
	ccbc32cc46374       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   404017157e4ba       storage-provisioner                                    kube-system
	b50ab2696e1e6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   11d26d01fb374       kube-apiserver-default-k8s-diff-port-726261            kube-system
	246dd1e6858bd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   52b4d220d3516       etcd-default-k8s-diff-port-726261                      kube-system
	dc3d29d35b622       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   6e6bd8b7953e9       kube-controller-manager-default-k8s-diff-port-726261   kube-system
	460c9f8d4b064       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   92bffd075a7ee       kube-scheduler-default-k8s-diff-port-726261            kube-system
	
	
	==> coredns [7ef10f196889c9d3ac8c5dd4fd68549002b4069ef13629eda75f63e301942ebf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46410 - 30024 "HINFO IN 8990126013200876076.5813043168634314061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054902659s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-726261
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-726261
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-726261
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-726261
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:45:51 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:45:51 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:45:51 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:45:51 +0000   Sun, 23 Nov 2025 08:44:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-726261
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                72a55ebb-5247-4a4a-aaf5-7a6c6d5788f6
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-8f8f5                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-726261                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-4zwgv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-726261             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-726261    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-sn4sp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-726261             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tb8zk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fnxnm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node default-k8s-diff-port-726261 event: Registered Node default-k8s-diff-port-726261 in Controller
	  Normal  NodeReady                97s                kubelet          Node default-k8s-diff-port-726261 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node default-k8s-diff-port-726261 event: Registered Node default-k8s-diff-port-726261 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [246dd1e6858bda7bf5fe65c5645c08c92b5d0ed231ce7b5b4abd97fac4802f8f] <==
	{"level":"warn","ts":"2025-11-23T08:44:59.754774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.763073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.779279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.793162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.803673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.812835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.823506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.835881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.845085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.856357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.871308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.879916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.889350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.905141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.918361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.930164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.950614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.960213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.969958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.977044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.996968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:00.004882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:00.015120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:00.096132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:45:06.988392Z","caller":"traceutil/trace.go:172","msg":"trace[1619490187] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"164.870295ms","start":"2025-11-23T08:45:06.823498Z","end":"2025-11-23T08:45:06.988368Z","steps":["trace[1619490187] 'process raft request'  (duration: 126.841847ms)","trace[1619490187] 'compare'  (duration: 37.650206ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:45:55 up  1:28,  0 user,  load average: 3.69, 3.74, 2.45
	Linux default-k8s-diff-port-726261 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1da1b5290a3ae07eb95a2f27c1a53ff4324a0646fd34b71680617d7f9aaaf8fb] <==
	I1123 08:45:01.792024       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:45:01.792334       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:45:01.792518       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:45:01.792535       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:45:01.792560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:45:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:45:01.997378       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:45:01.997426       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:45:01.997439       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:45:01.997572       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:45:02.397809       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:02.397829       1 metrics.go:72] Registering metrics
	I1123 08:45:02.397881       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:11.998032       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:11.999041       1 main.go:301] handling current node
	I1123 08:45:22.000158       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:22.000234       1 main.go:301] handling current node
	I1123 08:45:31.997833       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:31.997875       1 main.go:301] handling current node
	I1123 08:45:41.998172       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:41.998332       1 main.go:301] handling current node
	I1123 08:45:51.997549       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:51.997588       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b50ab2696e1e67f0ac0d0181ec5963bc5d4a3d2b32af3b3f35daa7d47a58c5e9] <==
	I1123 08:45:00.681620       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:45:00.681625       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:45:00.681631       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:45:00.681891       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 08:45:00.681932       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:45:00.683278       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:45:00.689316       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 08:45:00.689368       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:45:00.690147       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:45:00.690163       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:45:00.693117       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:45:00.693149       1 policy_source.go:240] refreshing policies
	I1123 08:45:00.707581       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:45:00.722749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:45:01.050767       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:45:01.135162       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:45:01.156676       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:45:01.167846       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:45:01.175235       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:45:01.307287       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.219.111"}
	I1123 08:45:01.328317       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.247.40"}
	I1123 08:45:01.594001       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:45:04.269288       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:45:04.469968       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:45:04.667802       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [dc3d29d35b622d6b93507aea04eac9baab619145bad0ffc805501a72a5c213eb] <==
	I1123 08:45:04.066710       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:45:04.066751       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:45:04.066771       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:45:04.066640       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:45:04.066756       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:45:04.066807       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:45:04.066886       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:45:04.066990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-726261"
	I1123 08:45:04.067083       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 08:45:04.067781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:45:04.072047       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:45:04.072162       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:04.074296       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:04.079436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:04.079451       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:45:04.079459       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:04.084274       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:45:04.087531       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:45:04.088749       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:04.091884       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:45:04.095190       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:45:04.097548       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:45:04.098836       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:45:04.116234       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:45:04.116262       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [5570992f3d35e1c9011ec15df9afc2ce9eba453be9b90083b5f4b396eab5dd4e] <==
	I1123 08:45:01.566671       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:45:01.629987       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:45:01.730482       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:45:01.730602       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:45:01.730882       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:45:01.757187       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:45:01.757252       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:45:01.765129       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:45:01.766547       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:45:01.766633       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:01.771454       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:45:01.773063       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:45:01.771959       1 config.go:200] "Starting service config controller"
	I1123 08:45:01.773458       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:45:01.771981       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:45:01.773707       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:45:01.773950       1 config.go:309] "Starting node config controller"
	I1123 08:45:01.774811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:45:01.774828       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:45:01.873545       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:45:01.873580       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:45:01.874731       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [460c9f8d4b0648fd809721216954ce6522f68a315fdb7c8fbacbaeb8288f1ffb] <==
	I1123 08:44:58.300301       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:45:00.701995       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:45:00.702085       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:00.710793       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:45:00.710833       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:45:00.710913       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:00.710924       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:00.710942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:00.710953       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:00.711293       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:45:00.711654       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:45:00.811442       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:45:00.811482       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:00.811445       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:45:04 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:04.754625     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nbrq\" (UniqueName: \"kubernetes.io/projected/01fb6bc8-9147-4bd4-8515-54325b5f4163-kube-api-access-9nbrq\") pod \"kubernetes-dashboard-855c9754f9-fnxnm\" (UID: \"01fb6bc8-9147-4bd4-8515-54325b5f4163\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fnxnm"
	Nov 23 08:45:04 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:04.754679     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56kmw\" (UniqueName: \"kubernetes.io/projected/f35100bc-3f6c-4d3c-8cac-05619bb18cc5-kube-api-access-56kmw\") pod \"dashboard-metrics-scraper-6ffb444bf9-tb8zk\" (UID: \"f35100bc-3f6c-4d3c-8cac-05619bb18cc5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk"
	Nov 23 08:45:04 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:04.754724     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f35100bc-3f6c-4d3c-8cac-05619bb18cc5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-tb8zk\" (UID: \"f35100bc-3f6c-4d3c-8cac-05619bb18cc5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk"
	Nov 23 08:45:04 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:04.754875     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/01fb6bc8-9147-4bd4-8515-54325b5f4163-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fnxnm\" (UID: \"01fb6bc8-9147-4bd4-8515-54325b5f4163\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fnxnm"
	Nov 23 08:45:09 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:09.219977     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:45:11 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:11.281147     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fnxnm" podStartSLOduration=1.869320657 podStartE2EDuration="7.28112092s" podCreationTimestamp="2025-11-23 08:45:04 +0000 UTC" firstStartedPulling="2025-11-23 08:45:05.233719919 +0000 UTC m=+8.257286568" lastFinishedPulling="2025-11-23 08:45:10.645520171 +0000 UTC m=+13.669086831" observedRunningTime="2025-11-23 08:45:11.202475341 +0000 UTC m=+14.226042010" watchObservedRunningTime="2025-11-23 08:45:11.28112092 +0000 UTC m=+14.304687589"
	Nov 23 08:45:14 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:14.188076     715 scope.go:117] "RemoveContainer" containerID="3c2b322e336bc6a6467ebdd9dc0c5441806dd7af030c146c481cf4ed0ad46183"
	Nov 23 08:45:15 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:15.193109     715 scope.go:117] "RemoveContainer" containerID="3c2b322e336bc6a6467ebdd9dc0c5441806dd7af030c146c481cf4ed0ad46183"
	Nov 23 08:45:15 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:15.193272     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:15 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:15.193492     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:16 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:16.198602     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:16 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:16.198833     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:22 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:22.315100     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:22 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:22.315301     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:32 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:32.241340     715 scope.go:117] "RemoveContainer" containerID="ccbc32cc46374e63b5543296cbf640d9549909d2c0eceece3434d9968f9a5845"
	Nov 23 08:45:33 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:33.098340     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:33 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:33.247672     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:33 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:33.247880     715 scope.go:117] "RemoveContainer" containerID="03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a"
	Nov 23 08:45:33 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:33.248062     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:42 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:42.314586     715 scope.go:117] "RemoveContainer" containerID="03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a"
	Nov 23 08:45:42 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:42.314848     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:52 default-k8s-diff-port-726261 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:45:53 default-k8s-diff-port-726261 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:45:53 default-k8s-diff-port-726261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:45:53 default-k8s-diff-port-726261 systemd[1]: kubelet.service: Consumed 1.663s CPU time.
	
	
	==> kubernetes-dashboard [3def24cbd2530ecf5a755e903ca77a95246cde5cba1a19043250f071201a1518] <==
	2025/11/23 08:45:10 Using namespace: kubernetes-dashboard
	2025/11/23 08:45:10 Using in-cluster config to connect to apiserver
	2025/11/23 08:45:10 Using secret token for csrf signing
	2025/11/23 08:45:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:45:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:45:10 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:45:10 Generating JWE encryption key
	2025/11/23 08:45:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:45:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:45:10 Initializing JWE encryption key from synchronized object
	2025/11/23 08:45:10 Creating in-cluster Sidecar client
	2025/11/23 08:45:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:45:10 Serving insecurely on HTTP port: 9090
	2025/11/23 08:45:10 Starting overwatch
	2025/11/23 08:45:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70] <==
	I1123 08:45:32.290047       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:32.296634       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:32.296671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:32.298450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:35.752621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:40.012853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:43.610484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:46.664184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:49.685848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:49.690826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:49.690963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:49.691033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1459ed91-8156-4cda-ba23-7e39e4104244", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-726261_2837eaf0-83d0-4b26-844c-6b7636ea4d52 became leader
	I1123 08:45:49.691112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-726261_2837eaf0-83d0-4b26-844c-6b7636ea4d52!
	W1123 08:45:49.692749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:49.696065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:49.791302       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-726261_2837eaf0-83d0-4b26-844c-6b7636ea4d52!
	W1123 08:45:51.698396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:51.703041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.706659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.711254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:55.715214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:55.719863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ccbc32cc46374e63b5543296cbf640d9549909d2c0eceece3434d9968f9a5845] <==
	I1123 08:45:01.486423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:45:31.489228       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261: exit status 2 (365.345813ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-726261 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-726261
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-726261:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387",
	        "Created": "2025-11-23T08:43:38.364416328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:50.667804948Z",
	            "FinishedAt": "2025-11-23T08:44:49.711995231Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/hostname",
	        "HostsPath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/hosts",
	        "LogPath": "/var/lib/docker/containers/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387/55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387-json.log",
	        "Name": "/default-k8s-diff-port-726261",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-726261:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-726261",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55c5a560eb124bfa65506f19a4683ef7407a2b31a1d64fbebda704b4bac6a387",
	                "LowerDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60f05dfc24e03f1be748b14d13c2bbd9f65dfe3cda01577133fe45d082a79e01/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-726261",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-726261/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-726261",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-726261",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-726261",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a2b8d0d77255d4c20bb4618e494392c10ec6841c5f07e8e595a7c649e69015b0",
	            "SandboxKey": "/var/run/docker/netns/a2b8d0d77255",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-726261": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8e58961f30240336633bec998e074fa68c1170ebe5fe0d36562f8ff59e516d42",
	                    "EndpointID": "9c28991b39a8f699eed76c94eb114497c8f7961eca4122f54a5ad0dc02f5935d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "52:4b:4b:82:83:df",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-726261",
	                        "55c5a560eb12"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
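The NetworkSettings.Ports map in the inspect output above is how minikube resolves its endpoints: each exposed container port (22, 2376, 5000, 8444, 32443) is bound to an ephemeral host port on 127.0.0.1. A minimal sketch of reading those bindings back, assuming only that the Docker CLI is on PATH and the container from this run still exists:

	// port_lookup.go: decode just the port map from `docker inspect`,
	// instead of parsing the full inspect document shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type binding struct {
		HostIp   string
		HostPort string
	}

	func main() {
		out, err := exec.Command("docker", "inspect",
			"--format", "{{json .NetworkSettings.Ports}}",
			"default-k8s-diff-port-726261").Output()
		if err != nil {
			panic(err)
		}
		ports := map[string][]binding{}
		if err := json.Unmarshal(out, &ports); err != nil {
			panic(err)
		}
		for port, bs := range ports {
			for _, b := range bs {
				// e.g. 22/tcp -> 127.0.0.1:33121
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}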
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261: exit status 2 (363.908239ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-726261 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-726261 logs -n 25: (1.147318255s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p no-preload-187607 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ addons  │ enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-177890                                                                                                                                                                                                               │ disable-driver-mounts-177890 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-057894 image list --format=json                                                                                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-057894 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ default-k8s-diff-port-726261 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p default-k8s-diff-port-726261 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ image   │ no-preload-187607 image list --format=json                                                                                                                                                                                                    │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-187607 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ stop    │ -p embed-certs-756339 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
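	The header above fixes the shape of every line that follows: a severity letter (I/W/E/F), the date as mmdd, a microsecond timestamp, the thread id, the source file and line, then the message. A minimal sketch of splitting one such line (the sample is taken from this log):

	// klog_parse.go: split a klog-format line into its fields,
	// per the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" layout.
	package main

	import (
		"fmt"
		"regexp"
	)

	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I1123 08:45:01.745123  329090 out.go:360] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}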
	I1123 08:45:01.745123  329090 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:01.745432  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745440  329090 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:01.745446  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745739  329090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:01.746375  329090 out.go:368] Setting JSON to false
	I1123 08:45:01.748064  329090 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5249,"bootTime":1763882253,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:45:01.748157  329090 start.go:143] virtualization: kvm guest
	I1123 08:45:01.750156  329090 out.go:179] * [embed-certs-756339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:45:01.753393  329090 notify.go:221] Checking for updates...
	I1123 08:45:01.753398  329090 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:45:01.755146  329090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:45:01.756598  329090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:01.757836  329090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:45:01.758954  329090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:45:01.760360  329090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:45:01.765276  329090 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765522  329090 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765681  329090 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:45:01.765827  329090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:45:01.800644  329090 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:45:01.801313  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.871017  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.860213573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.871190  329090 docker.go:319] overlay module found
	I1123 08:45:01.872879  329090 out.go:179] * Using the docker driver based on user configuration
	I1123 08:45:01.874146  329090 start.go:309] selected driver: docker
	I1123 08:45:01.874172  329090 start.go:927] validating driver "docker" against <nil>
	I1123 08:45:01.874185  329090 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:45:01.874731  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.950283  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.938442114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.950526  329090 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:45:01.950805  329090 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.952251  329090 out.go:179] * Using Docker driver with root privileges
	I1123 08:45:01.953421  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:01.953493  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:01.953508  329090 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:45:01.953584  329090 start.go:353] cluster config:
	{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:01.954827  329090 out.go:179] * Starting "embed-certs-756339" primary control-plane node in "embed-certs-756339" cluster
	I1123 08:45:01.955848  329090 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:45:01.957107  329090 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:45:01.958365  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:01.958393  329090 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:45:01.958408  329090 cache.go:65] Caching tarball of preloaded images
	I1123 08:45:01.958465  329090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:45:01.958507  329090 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:45:01.958523  329090 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:45:01.958635  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:01.958661  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json: {Name:mk2bf238bbe57398e8f0e67e0ff345b4c996e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:01.983475  329090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:45:01.983497  329090 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:45:01.983513  329090 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:45:01.983540  329090 start.go:360] acquireMachinesLock for embed-certs-756339: {Name:mk2607c5ea38ca6bd330e0a548b36202f67f84a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:01.983642  329090 start.go:364] duration metric: took 84.653µs to acquireMachinesLock for "embed-certs-756339"
	I1123 08:45:01.983672  329090 start.go:93] Provisioning new machine with config: &{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:01.983792  329090 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:45:01.986901  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.692445857s)
	I1123 08:45:01.987002  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.670756175s)
	I1123 08:45:01.987136  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.507320621s)
	I1123 08:45:01.987186  323816 api_server.go:72] duration metric: took 2.902108336s to wait for apiserver process to appear ...
	I1123 08:45:01.987204  323816 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:01.987282  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:01.988808  323816 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-187607 addons enable metrics-server
	
	I1123 08:45:01.992707  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:45:01.992732  323816 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
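	The 500 above is the normal transient state while the apiserver finishes its post-start hooks; only rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still pending, and the log keeps polling until /healthz returns 200 "ok" (which it does at 08:45:02.491678 below). A minimal sketch of such a polling loop, assuming only the endpoint URL from this run and skipping verification of minikube's self-signed cert as a diagnostic client would:

	// healthz_poll.go: poll the apiserver's /healthz until it answers 200 "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a minikube-generated certificate.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 60; i++ {
			resp, err := client.Get("https://192.168.94.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // control plane is healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for /healthz")
	}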
	I1123 08:45:01.994529  323816 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 08:45:01.422757  323135 addons.go:530] duration metric: took 3.555416147s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:01.910007  323135 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:45:01.915784  323135 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:45:01.917062  323135 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:01.917089  323135 api_server.go:131] duration metric: took 507.92158ms to wait for apiserver health ...
	I1123 08:45:01.917100  323135 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:01.921785  323135 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:01.921998  323135 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.922039  323135 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.922068  323135 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.922079  323135 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.922087  323135 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.922095  323135 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.922107  323135 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.922115  323135 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.922124  323135 system_pods.go:74] duration metric: took 5.016936ms to wait for pod list to return data ...
	I1123 08:45:01.922189  323135 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:01.925409  323135 default_sa.go:45] found service account: "default"
	I1123 08:45:01.925452  323135 default_sa.go:55] duration metric: took 3.245595ms for default service account to be created ...
	I1123 08:45:01.925463  323135 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:01.931804  323135 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.931872  323135 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.931898  323135 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.931961  323135 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.931995  323135 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.932018  323135 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.932037  323135 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.932066  323135 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.932076  323135 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.932086  323135 system_pods.go:126] duration metric: took 6.61665ms to wait for k8s-apps to be running ...
	I1123 08:45:01.932097  323135 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:01.932143  323135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:01.947263  323135 system_svc.go:56] duration metric: took 15.160659ms WaitForService to wait for kubelet
	I1123 08:45:01.947298  323135 kubeadm.go:587] duration metric: took 4.08017724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.947325  323135 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:01.950481  323135 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:01.950509  323135 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:01.950526  323135 node_conditions.go:105] duration metric: took 3.194245ms to run NodePressure ...
	I1123 08:45:01.950541  323135 start.go:242] waiting for startup goroutines ...
	I1123 08:45:01.950555  323135 start.go:247] waiting for cluster config update ...
	I1123 08:45:01.950571  323135 start.go:256] writing updated cluster config ...
	I1123 08:45:01.950876  323135 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:01.955038  323135 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:01.958449  323135 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:03.965246  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:01.995584  323816 addons.go:530] duration metric: took 2.910424664s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:02.487321  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:02.491678  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:45:02.492738  323816 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:02.492762  323816 api_server.go:131] duration metric: took 505.498506ms to wait for apiserver health ...
	I1123 08:45:02.492770  323816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:02.496254  323816 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:02.496282  323816 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.496290  323816 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.496296  323816 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.496302  323816 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.496310  323816 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.496317  323816 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.496324  323816 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.496334  323816 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.496340  323816 system_pods.go:74] duration metric: took 3.565076ms to wait for pod list to return data ...
	I1123 08:45:02.496348  323816 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:02.498409  323816 default_sa.go:45] found service account: "default"
	I1123 08:45:02.498426  323816 default_sa.go:55] duration metric: took 2.073405ms for default service account to be created ...
	I1123 08:45:02.498434  323816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:02.500853  323816 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.500888  323816 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.500899  323816 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.500912  323816 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.500929  323816 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.500941  323816 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.500951  323816 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.500961  323816 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.500971  323816 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.500978  323816 system_pods.go:126] duration metric: took 2.538671ms to wait for k8s-apps to be running ...
	I1123 08:45:02.500991  323816 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:02.501036  323816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:02.522199  323816 system_svc.go:56] duration metric: took 21.201972ms WaitForService to wait for kubelet
	I1123 08:45:02.522225  323816 kubeadm.go:587] duration metric: took 3.437147085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:02.522246  323816 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:02.524870  323816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:02.524905  323816 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:02.524925  323816 node_conditions.go:105] duration metric: took 2.673388ms to run NodePressure ...
	I1123 08:45:02.524943  323816 start.go:242] waiting for startup goroutines ...
	I1123 08:45:02.524953  323816 start.go:247] waiting for cluster config update ...
	I1123 08:45:02.524970  323816 start.go:256] writing updated cluster config ...
	I1123 08:45:02.525241  323816 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:02.529440  323816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:02.532956  323816 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:04.545550  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
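	Both profiles are now in the same extra wait: pod_ready polls every kube-system pod carrying one of the listed labels until it reports Ready, for up to 4m0s. A minimal sketch of an equivalent wait, assuming kubectl is on PATH and the kubeconfig points at the profile; kubectl's built-in condition waiting stands in for minikube's own poller:

	// pod_wait.go: block until the labeled kube-system pods report Ready.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "wait",
			"--namespace", "kube-system",
			"--for=condition=Ready",
			"pod", "--selector", "k8s-app=kube-dns",
			"--timeout", "4m")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1) // pods never became Ready within the timeout
		}
	}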
	I1123 08:45:01.985817  329090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:45:01.986054  329090 start.go:159] libmachine.API.Create for "embed-certs-756339" (driver="docker")
	I1123 08:45:01.986094  329090 client.go:173] LocalClient.Create starting
	I1123 08:45:01.986158  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:45:01.986202  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986228  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986299  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:45:01.986331  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986349  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986747  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:45:02.006351  329090 cli_runner.go:211] docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:45:02.006428  329090 network_create.go:284] running [docker network inspect embed-certs-756339] to gather additional debugging logs...
	I1123 08:45:02.006453  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339
	W1123 08:45:02.024029  329090 cli_runner.go:211] docker network inspect embed-certs-756339 returned with exit code 1
	I1123 08:45:02.024056  329090 network_create.go:287] error running [docker network inspect embed-certs-756339]: docker network inspect embed-certs-756339: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-756339 not found
	I1123 08:45:02.024076  329090 network_create.go:289] output of [docker network inspect embed-certs-756339]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-756339 not found
	
	** /stderr **
	I1123 08:45:02.024188  329090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:02.041589  329090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:45:02.042147  329090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:45:02.042884  329090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:45:02.043340  329090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c80b7bca17a7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:f1:41:59:09:b5} reservation:<nil>}
	I1123 08:45:02.043937  329090 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8e58961f3024 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b6:f0:e4:3c:63:d5} reservation:<nil>}
	I1123 08:45:02.044437  329090 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e4a86ee726da IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ae:37:bc:fe:9d:3a} reservation:<nil>}
	I1123 08:45:02.045221  329090 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06cd0}
	I1123 08:45:02.045242  329090 network_create.go:124] attempt to create docker network embed-certs-756339 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1123 08:45:02.045287  329090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-756339 embed-certs-756339
	I1123 08:45:02.095267  329090 network_create.go:108] docker network embed-certs-756339 192.168.103.0/24 created
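	The scan above walks candidate private /24s and takes the first one no existing bridge claims; the step of 9 in the third octet (.49, .58, .67, .76, .85, .94, then .103) is read directly off the subnets this run skips, not from any documented contract. A minimal sketch of that selection, with the taken set hard-coded from the log:

	// subnet_pick.go: choose the first free 192.168.x.0/24 candidate,
	// stepping the third octet by 9 as observed in this run.
	package main

	import "fmt"

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		for third := 49; third < 256; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[subnet] {
				fmt.Println("using free private subnet", subnet) // 192.168.103.0/24
				return
			}
		}
	}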
	I1123 08:45:02.095296  329090 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-756339" container
	I1123 08:45:02.095350  329090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:45:02.111533  329090 cli_runner.go:164] Run: docker volume create embed-certs-756339 --label name.minikube.sigs.k8s.io=embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:45:02.128824  329090 oci.go:103] Successfully created a docker volume embed-certs-756339
	I1123 08:45:02.128896  329090 cli_runner.go:164] Run: docker run --rm --name embed-certs-756339-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --entrypoint /usr/bin/test -v embed-certs-756339:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:45:02.559029  329090 oci.go:107] Successfully prepared a docker volume embed-certs-756339
	I1123 08:45:02.559098  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:02.559108  329090 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:45:02.559163  329090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:45:06.464312  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:08.466215  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:06.707246  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:09.040137  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:11.046122  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:07.131448  329090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.572224972s)
	I1123 08:45:07.131484  329090 kic.go:203] duration metric: took 4.572370498s to extract preloaded images to volume ...
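	Note: the preceding pair of docker run calls is minikube's volume-seeding pattern: a disposable container mounts the named volume (probed first with /usr/bin/test -d /var/lib), then a second one unpacks the preload tarball into it with tar. A hand-run sketch of the extraction step, using the tarball path and volume name from this log (the final ls check assumes the tarball's lib/ layout):
	
	    VOL=embed-certs-756339
	    TAR=/home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$TAR":/preloaded.tar:ro -v "$VOL":/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948 \
	      -I lz4 -xf /preloaded.tar -C /extractDir
	    docker run --rm -v "$VOL":/check busybox ls /check/lib   # extracted image store lands under lib/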
	W1123 08:45:07.131573  329090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:45:07.131616  329090 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:45:07.131860  329090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:45:07.219659  329090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-756339 --name embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-756339 --network embed-certs-756339 --ip 192.168.103.2 --volume embed-certs-756339:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:45:07.635482  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Running}}
	I1123 08:45:07.658965  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.681327  329090 cli_runner.go:164] Run: docker exec embed-certs-756339 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:45:07.737769  329090 oci.go:144] the created container "embed-certs-756339" has a running status.
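	Note: each --publish=127.0.0.1::PORT in the docker run above binds a random ephemeral host port to the fixed container port, so nothing is hard-coded on the host. That is why the provisioner below keeps recovering the SSH port with a container-inspect template; the same lookup works by hand:
	
	    docker container inspect embed-certs-756339 \
	      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	    # prints e.g. 33131 (as seen in the SSH lines below), after which:
	    ssh -p 33131 -i /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa docker@127.0.0.1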
	I1123 08:45:07.737802  329090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa...
	I1123 08:45:07.895228  329090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:45:07.935222  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.958382  329090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:45:07.958405  329090 kic_runner.go:114] Args: [docker exec --privileged embed-certs-756339 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:45:08.015520  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:08.039803  329090 machine.go:94] provisionDockerMachine start ...
	I1123 08:45:08.039898  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:08.064345  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:08.064680  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:08.064723  329090 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:45:08.065347  329090 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47890->127.0.0.1:33131: read: connection reset by peer
	I1123 08:45:11.244730  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.244755  329090 ubuntu.go:182] provisioning hostname "embed-certs-756339"
	I1123 08:45:11.244812  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.273763  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.274055  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.274072  329090 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-756339 && echo "embed-certs-756339" | sudo tee /etc/hostname
	I1123 08:45:11.457570  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.457714  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.488146  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.488457  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.488485  329090 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-756339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-756339/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-756339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:45:11.660198  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:45:11.660362  329090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:45:11.660453  329090 ubuntu.go:190] setting up certificates
	I1123 08:45:11.660471  329090 provision.go:84] configureAuth start
	I1123 08:45:11.661011  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:11.684982  329090 provision.go:143] copyHostCerts
	I1123 08:45:11.685043  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:45:11.685053  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:45:11.685140  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:45:11.685249  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:45:11.685255  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:45:11.685292  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:45:11.685383  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:45:11.685391  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:45:11.685427  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:45:11.685506  329090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.embed-certs-756339 san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]
	I1123 08:45:11.758697  329090 provision.go:177] copyRemoteCerts
	I1123 08:45:11.758777  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:45:11.758833  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.787179  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:11.905965  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:45:11.934744  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:45:11.961707  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:45:11.985963  329090 provision.go:87] duration metric: took 325.479379ms to configureAuth
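	Note: copyRemoteCerts pushes the CA plus a freshly minted server certificate to /etc/docker on the node (/etc/docker is the docker-machine-era path minikube keeps using even with CRI-O as the runtime, as this log shows). To confirm the SANs requested above actually made it into the signed cert, on the node (sketch):
	
	    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
	    # expect DNS:embed-certs-756339, DNS:localhost, DNS:minikube, IP:127.0.0.1, IP:192.168.103.2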
	I1123 08:45:11.985992  329090 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:45:11.986220  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:11.986358  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.011499  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:12.011833  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:12.011872  329090 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:45:12.373361  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:45:12.373388  329090 machine.go:97] duration metric: took 4.333562614s to provisionDockerMachine
	I1123 08:45:12.373402  329090 client.go:176] duration metric: took 10.387301049s to LocalClient.Create
	I1123 08:45:12.373431  329090 start.go:167] duration metric: took 10.387376613s to libmachine.API.Create "embed-certs-756339"
	I1123 08:45:12.373444  329090 start.go:293] postStartSetup for "embed-certs-756339" (driver="docker")
	I1123 08:45:12.373458  329090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:45:12.373521  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:45:12.373575  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.394472  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.505303  329090 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:45:12.509881  329090 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:45:12.509946  329090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:45:12.509962  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:45:12.510025  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:45:12.510127  329090 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:45:12.510256  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:45:12.520339  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:12.547586  329090 start.go:296] duration metric: took 174.127267ms for postStartSetup
	I1123 08:45:12.548040  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.572325  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:12.572597  329090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:45:12.572652  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.595241  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.708576  329090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:45:12.713786  329090 start.go:128] duration metric: took 10.729979645s to createHost
	I1123 08:45:12.713812  329090 start.go:83] releasing machines lock for "embed-certs-756339", held for 10.730153164s
	I1123 08:45:12.713888  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.744434  329090 ssh_runner.go:195] Run: cat /version.json
	I1123 08:45:12.744496  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.744678  329090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:45:12.744776  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.771659  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.771722  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.970377  329090 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:12.980003  329090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:45:13.031076  329090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:45:13.037986  329090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:45:13.038091  329090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:45:13.078655  329090 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
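	Note: the find/mv above only parks the stock bridge and podman CNI configs by appending a .mk_disabled suffix (kindnet is installed later instead, per the CNI manager lines below). The change is reversible by hand:
	
	    ls /etc/cni/net.d/
	    for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done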
	I1123 08:45:13.078678  329090 start.go:496] detecting cgroup driver to use...
	I1123 08:45:13.078778  329090 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:45:13.078826  329090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:45:13.102501  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:45:13.121011  329090 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:45:13.121088  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:45:13.144025  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:45:13.166610  329090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:45:13.266885  329090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:45:13.383738  329090 docker.go:234] disabling docker service ...
	I1123 08:45:13.383808  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:45:13.408902  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:45:13.425055  329090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:45:13.533375  329090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:45:13.641970  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:45:13.655349  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:45:13.672802  329090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:45:13.672859  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.682619  329090 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:45:13.682671  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.691340  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.700633  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.709880  329090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:45:13.717844  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.726872  329090 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.741035  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.750011  329090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:45:13.757738  329090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:45:13.764834  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:13.846176  329090 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:45:15.041719  329090 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.195506975s)
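	Note: taken together, the sed calls above converge /etc/crio/crio.conf.d/02-crio.conf on four settings: the pause image, cgroup_manager = "systemd", conmon_cgroup = "pod", and a default_sysctls entry opening unprivileged low ports. An equivalent standalone drop-in (illustrative sketch, not what minikube itself writes):
	
	    sudo tee /etc/crio/crio.conf.d/99-example.conf >/dev/null <<-'EOF'
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl restart crio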
	I1123 08:45:15.041743  329090 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:45:15.041806  329090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:45:15.046071  329090 start.go:564] Will wait 60s for crictl version
	I1123 08:45:15.046136  329090 ssh_runner.go:195] Run: which crictl
	I1123 08:45:15.049573  329090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:45:15.078843  329090 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
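	Note: crictl reads its endpoint from the /etc/crictl.yaml written a few lines up; the version handshake above can be reproduced explicitly on the node:
	
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    sudo crictl info   # runtime readiness and CNI status as JSON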
	I1123 08:45:15.078920  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.108962  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.139712  329090 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 08:45:10.968346  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.466785  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.540283  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:16.038123  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:15.141197  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:15.159501  329090 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 08:45:15.163431  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:15.173476  329090 kubeadm.go:884] updating cluster {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:45:15.173575  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:15.173616  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.210172  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.210193  329090 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:45:15.210244  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.237085  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.237104  329090 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:45:15.237113  329090 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 08:45:15.237217  329090 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-756339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
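	Note: the unit text above is written as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the 369-byte scp below); the empty ExecStart= line clears the packaged command before substituting minikube's own. To verify the merged unit on the node (sketch):
	
	    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	    systemctl show kubelet -p ExecStart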
	I1123 08:45:15.237295  329090 ssh_runner.go:195] Run: crio config
	I1123 08:45:15.283601  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:15.283625  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:15.283643  329090 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:45:15.283669  329090 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-756339 NodeName:embed-certs-756339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:45:15.283837  329090 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-756339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:45:15.283904  329090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:45:15.292504  329090 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:45:15.292566  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:45:15.300378  329090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 08:45:15.312974  329090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:45:15.327882  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
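	Note: the rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (2217 bytes) and promoted to kubeadm.yaml just before init. A config like this can be linted offline; a sketch assuming the node's v1.34.1 kubeadm binary:
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new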
	I1123 08:45:15.340181  329090 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:45:15.343646  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:15.354110  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:15.443097  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:15.467751  329090 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339 for IP: 192.168.103.2
	I1123 08:45:15.467775  329090 certs.go:195] generating shared ca certs ...
	I1123 08:45:15.467794  329090 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.467944  329090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:45:15.468013  329090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:45:15.468026  329090 certs.go:257] generating profile certs ...
	I1123 08:45:15.468092  329090 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key
	I1123 08:45:15.468108  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt with IP's: []
	I1123 08:45:15.681556  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt ...
	I1123 08:45:15.681578  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt: {Name:mk22797cd88ef1f778f787e25af3588a79d11855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681755  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key ...
	I1123 08:45:15.681771  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key: {Name:mk2507e79a5f05fa7cb11db2054cd014292902df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681880  329090 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354
	I1123 08:45:15.681896  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1123 08:45:15.727484  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 ...
	I1123 08:45:15.727506  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354: {Name:mkade0e3ba918afced6504828d64527edcb7e06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727677  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 ...
	I1123 08:45:15.727718  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354: {Name:mke39adf49845e1231f060e2780420238d4a87bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727834  329090 certs.go:382] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt
	I1123 08:45:15.727927  329090 certs.go:386] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key
	I1123 08:45:15.728008  329090 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key
	I1123 08:45:15.728025  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt with IP's: []
	I1123 08:45:15.834669  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt ...
	I1123 08:45:15.834720  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt: {Name:mkad5e6304235e6d8f0ebd086b0ccf458022d6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.834861  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key ...
	I1123 08:45:15.834879  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key: {Name:mka603d9600779233619dbc354e88b03aa5d1f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.835045  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:45:15.835081  329090 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:45:15.835092  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:45:15.835118  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:45:15.835142  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:45:15.835178  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:45:15.835218  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:15.835729  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:45:15.855139  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:45:15.873868  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:45:15.894547  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:45:15.912933  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:45:15.930981  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:45:15.949401  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:45:15.970429  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:45:15.989205  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:45:16.008793  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:45:16.025737  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:45:16.043175  329090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:45:16.055931  329090 ssh_runner.go:195] Run: openssl version
	I1123 08:45:16.061639  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:45:16.069652  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073176  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073220  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.108921  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:45:16.116885  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:45:16.124882  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128591  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128656  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.185316  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:45:16.195245  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:45:16.206667  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211327  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211374  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.251180  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
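	Note: the <hash>.0 links above implement OpenSSL's hashed trust-store lookup: openssl x509 -hash prints the subject-name hash (b5213941 for minikubeCA, per the lines above) and verification resolves /etc/ssl/certs/<hash>.N. The same linking works for any cert:
	
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	    openssl verify "$CERT"   # should report: OK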
	I1123 08:45:16.260175  329090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:45:16.264022  329090 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:45:16.264083  329090 kubeadm.go:401] StartCluster: {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:16.264171  329090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:45:16.264218  329090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:45:16.292235  329090 cri.go:89] found id: ""
	I1123 08:45:16.292292  329090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:45:16.300794  329090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:45:16.308741  329090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:45:16.308794  329090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:45:16.316404  329090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:45:16.316422  329090 kubeadm.go:158] found existing configuration files:
	
	I1123 08:45:16.316458  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:45:16.324309  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:45:16.324349  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:45:16.332260  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:45:16.340786  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:45:16.340842  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:45:16.348658  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.358536  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:45:16.358583  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.368595  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:45:16.377891  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:45:16.377952  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:45:16.386029  329090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:45:16.424131  329090 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:45:16.424226  329090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:45:16.444456  329090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:45:16.444527  329090 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:45:16.444572  329090 kubeadm.go:319] OS: Linux
	I1123 08:45:16.444654  329090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:45:16.444763  329090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:45:16.444824  329090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:45:16.444916  329090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:45:16.444986  329090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:45:16.445059  329090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:45:16.445128  329090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:45:16.445197  329090 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:45:16.502432  329090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:45:16.502566  329090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:45:16.502717  329090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
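	Note: the "system verification failed" banner above is expected when kubeadm runs inside a container; that is exactly what the long --ignore-preflight-errors list in the init command suppresses (Swap, SystemVerification, the bridge-nf sysctl, and so on). To surface the unfiltered findings, preflight can be run on its own (sketch, using the paths from this log):
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
	      --config /var/tmp/minikube/kubeadm.yaml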
	I1123 08:45:16.512573  329090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:45:16.514857  329090 out.go:252]   - Generating certificates and keys ...
	I1123 08:45:16.514990  329090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:45:16.515094  329090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:45:16.608081  329090 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:45:16.680528  329090 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:45:16.801156  329090 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:45:17.144723  329090 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:45:17.391838  329090 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:45:17.392042  329090 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.447222  329090 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:45:17.447383  329090 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.644625  329090 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:45:17.916674  329090 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:45:18.538498  329090 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:45:18.538728  329090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:45:18.967277  329090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:45:19.377546  329090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:45:19.559622  329090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:45:20.075738  329090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:45:20.364836  329090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:45:20.365389  329090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:45:20.380029  329090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 08:45:15.964678  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.463898  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.038557  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:20.040142  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:20.381602  329090 out.go:252]   - Booting up control plane ...
	I1123 08:45:20.381763  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:45:20.381900  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:45:20.382610  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:45:20.395865  329090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:45:20.396015  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:45:20.402081  329090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:45:20.402378  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:45:20.402436  329090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:45:20.508331  329090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:45:20.508495  329090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:45:22.009994  329090 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501781773s
	I1123 08:45:22.014389  329090 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:45:22.014519  329090 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:45:22.014637  329090 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:45:22.014773  329090 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:45:23.091748  329090 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.077310791s
	I1123 08:45:23.589008  329090 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.574535055s
	I1123 08:45:25.015461  329090 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001048624s
	I1123 08:45:25.026445  329090 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:45:25.036344  329090 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:45:25.045136  329090 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:45:25.045341  329090 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-756339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:45:25.052213  329090 kubeadm.go:319] [bootstrap-token] Using token: jh7osp.28agjpkabxiw65fh
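	Note: the bootstrap token above is what the join command printed at the end of this init embeds; per the InitConfiguration earlier it expires after 24h (ttl: 24h0m0s). Tokens can be listed and re-minted afterwards:
	
	    kubeadm token list
	    kubeadm token create --print-join-command   # emits a fresh join line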
	W1123 08:45:20.963406  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.964352  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.538516  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:24.539132  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:25.055029  329090 out.go:252]   - Configuring RBAC rules ...
	I1123 08:45:25.055175  329090 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:45:25.058117  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:45:25.062975  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:45:25.066360  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:45:25.069196  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:45:25.071492  329090 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:45:25.419913  329090 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:45:25.836463  329090 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:45:26.420358  329090 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:45:26.421135  329090 kubeadm.go:319] 
	I1123 08:45:26.421252  329090 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:45:26.421277  329090 kubeadm.go:319] 
	I1123 08:45:26.421378  329090 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:45:26.421390  329090 kubeadm.go:319] 
	I1123 08:45:26.421426  329090 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:45:26.421521  329090 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:45:26.421603  329090 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:45:26.421620  329090 kubeadm.go:319] 
	I1123 08:45:26.421735  329090 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:45:26.421746  329090 kubeadm.go:319] 
	I1123 08:45:26.421806  329090 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:45:26.421815  329090 kubeadm.go:319] 
	I1123 08:45:26.421881  329090 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:45:26.421994  329090 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:45:26.422098  329090 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:45:26.422107  329090 kubeadm.go:319] 
	I1123 08:45:26.422206  329090 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:45:26.422316  329090 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:45:26.422325  329090 kubeadm.go:319] 
	I1123 08:45:26.422429  329090 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422527  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 08:45:26.422562  329090 kubeadm.go:319] 	--control-plane 
	I1123 08:45:26.422571  329090 kubeadm.go:319] 
	I1123 08:45:26.422711  329090 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:45:26.422722  329090 kubeadm.go:319] 
	I1123 08:45:26.422841  329090 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422947  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
	I1123 08:45:26.425509  329090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:45:26.425638  329090 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
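
	[annotation] The kubeadm output above ends with one-time join commands for additional control-plane and worker nodes. Bootstrap tokens such as jh7osp.28agjpkabxiw65fh expire (24 hours by default), so a node joining later needs a fresh one; a minimal sketch, run on the control-plane node:

	    # Prints a complete "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..."
	    # line with a newly minted token; the CA cert hash itself does not change.
	    sudo kubeadm token create --print-join-command
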
	I1123 08:45:26.425665  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:26.425679  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:26.427041  329090 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:45:26.427891  329090 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:45:26.432307  329090 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:45:26.432326  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:45:26.445364  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:45:26.642490  329090 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:45:26.642551  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:26.642592  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-756339 minikube.k8s.io/updated_at=2025_11_23T08_45_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-756339 minikube.k8s.io/primary=true
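
	[annotation] At this point minikube has applied the kindnet CNI manifest and issued the kubectl commands for the minikube-rbac ClusterRoleBinding and the node labels shown above; the `get sa default` loop that follows simply waits for the service-account controller to catch up. A quick way to verify both steps by hand (the app=kindnet label is an assumption based on the upstream kindnet manifest, not confirmed by this log):

	    # CNI pods should reach Running once the manifest is applied.
	    kubectl -n kube-system get pods -l app=kindnet -o wide
	    # The binding created by the clusterrolebinding command logged above.
	    kubectl get clusterrolebinding minikube-rbac -o yaml
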
	I1123 08:45:26.729263  329090 ops.go:34] apiserver oom_adj: -16
	I1123 08:45:26.729393  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 08:45:25.464467  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:27.964097  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:26.539240  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:29.038507  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:27.229843  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:27.730298  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.230009  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.730490  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.229984  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.730299  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.229522  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.729582  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.230290  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.293892  329090 kubeadm.go:1114] duration metric: took 4.651396638s to wait for elevateKubeSystemPrivileges
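
	[annotation] The repeated `kubectl get sa default` runs above are a roughly 500 ms polling loop: elevateKubeSystemPrivileges cannot rely on its cluster-admin grant until the controller manager has created the default ServiceAccount. A shell sketch of that wait:

	    # Poll until the "default" ServiceAccount exists, then continue.
	    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	      sleep 0.5
	    done
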
	I1123 08:45:31.293931  329090 kubeadm.go:403] duration metric: took 15.029851328s to StartCluster
	I1123 08:45:31.293953  329090 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.294038  329090 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:31.295585  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.295872  329090 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:31.295936  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:45:31.296007  329090 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:45:31.296114  329090 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-756339"
	I1123 08:45:31.296118  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:31.296134  329090 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-756339"
	I1123 08:45:31.296128  329090 addons.go:70] Setting default-storageclass=true in profile "embed-certs-756339"
	I1123 08:45:31.296166  329090 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-756339"
	I1123 08:45:31.296176  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.296604  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.296720  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.297232  329090 out.go:179] * Verifying Kubernetes components...
	I1123 08:45:31.299135  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:31.322679  329090 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:45:31.324511  329090 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.324536  329090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:45:31.324593  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.329451  329090 addons.go:239] Setting addon default-storageclass=true in "embed-certs-756339"
	I1123 08:45:31.329500  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.330018  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.359473  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.359508  329090 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.359523  329090 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:45:31.359576  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.383150  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.400104  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
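
	[annotation] The sed pipeline above rewrites the live CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.103.1 here) and enables the log plugin. To inspect the result:

	    # Corefile is the ConfigMap's data key; expect an injected stanza like
	    #     hosts {
	    #        192.168.103.1 host.minikube.internal
	    #        fallthrough
	    #     }
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
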
	I1123 08:45:31.438850  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:31.477184  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.500079  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.590832  329090 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1123 08:45:31.592356  329090 node_ready.go:35] waiting up to 6m0s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:31.806094  329090 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 08:45:30.466331  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:32.963158  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:34.963993  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:31.541665  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:34.038345  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:31.807238  329090 addons.go:530] duration metric: took 511.238501ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:45:32.094332  329090 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-756339" context rescaled to 1 replicas
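
	[annotation] The rescale logged above pins CoreDNS to a single replica, which is all a one-node cluster needs; expressed as a kubectl command, it amounts to:

	    # Equivalent of kapi.go's "rescaled to 1 replicas" step.
	    kubectl -n kube-system scale deployment coredns --replicas=1
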
	W1123 08:45:33.595476  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:36.094914  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:37.463401  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:39.463744  323135 pod_ready.go:94] pod "coredns-66bc5c9577-8f8f5" is "Ready"
	I1123 08:45:39.463771  323135 pod_ready.go:86] duration metric: took 37.505301624s for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.466073  323135 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.469881  323135 pod_ready.go:94] pod "etcd-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.469907  323135 pod_ready.go:86] duration metric: took 3.813451ms for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.471783  323135 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.475591  323135 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.475615  323135 pod_ready.go:86] duration metric: took 3.808626ms for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.477543  323135 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.662072  323135 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.662095  323135 pod_ready.go:86] duration metric: took 184.532328ms for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.861972  323135 pod_ready.go:83] waiting for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.262090  323135 pod_ready.go:94] pod "kube-proxy-sn4sp" is "Ready"
	I1123 08:45:40.262116  323135 pod_ready.go:86] duration metric: took 400.120277ms for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.462054  323135 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862186  323135 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:40.862212  323135 pod_ready.go:86] duration metric: took 400.136767ms for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862222  323135 pod_ready.go:40] duration metric: took 38.907156113s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:40.906296  323135 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:40.908135  323135 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-726261" cluster and "default" namespace by default
	W1123 08:45:36.537535  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:38.537920  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:40.537903  323816 pod_ready.go:94] pod "coredns-66bc5c9577-khlrk" is "Ready"
	I1123 08:45:40.537927  323816 pod_ready.go:86] duration metric: took 38.004948026s for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.540197  323816 pod_ready.go:83] waiting for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.543594  323816 pod_ready.go:94] pod "etcd-no-preload-187607" is "Ready"
	I1123 08:45:40.543613  323816 pod_ready.go:86] duration metric: took 3.39504ms for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.545430  323816 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.548523  323816 pod_ready.go:94] pod "kube-apiserver-no-preload-187607" is "Ready"
	I1123 08:45:40.548540  323816 pod_ready.go:86] duration metric: took 3.086438ms for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.550144  323816 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.736784  323816 pod_ready.go:94] pod "kube-controller-manager-no-preload-187607" is "Ready"
	I1123 08:45:40.736810  323816 pod_ready.go:86] duration metric: took 186.650289ms for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.936965  323816 pod_ready.go:83] waiting for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:38.095893  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:40.595721  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	I1123 08:45:41.336483  323816 pod_ready.go:94] pod "kube-proxy-f9d8j" is "Ready"
	I1123 08:45:41.336508  323816 pod_ready.go:86] duration metric: took 399.518187ms for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.536451  323816 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936068  323816 pod_ready.go:94] pod "kube-scheduler-no-preload-187607" is "Ready"
	I1123 08:45:41.936095  323816 pod_ready.go:86] duration metric: took 399.617585ms for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936110  323816 pod_ready.go:40] duration metric: took 39.406642608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:41.977753  323816 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:41.979147  323816 out.go:179] * Done! kubectl is now configured to use "no-preload-187607" cluster and "default" namespace by default
	I1123 08:45:43.095643  329090 node_ready.go:49] node "embed-certs-756339" is "Ready"
	I1123 08:45:43.095676  329090 node_ready.go:38] duration metric: took 11.503297149s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:43.095722  329090 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:45:43.095787  329090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:45:43.107848  329090 api_server.go:72] duration metric: took 11.811934824s to wait for apiserver process to appear ...
	I1123 08:45:43.107869  329090 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:43.107884  329090 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:45:43.112629  329090 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:45:43.113413  329090 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:43.113433  329090 api_server.go:131] duration metric: took 5.559653ms to wait for apiserver health ...
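
	[annotation] The healthz probe above can be reproduced by hand. Assuming anonymous authentication is enabled (the kubeadm default), /healthz is readable without a client certificate via the system:public-info-viewer binding:

	    # -k skips CA verification; a healthy apiserver answers 200 with body "ok".
	    curl -k https://192.168.103.2:8443/healthz
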
	I1123 08:45:43.113441  329090 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:43.116485  329090 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:43.116510  329090 system_pods.go:61] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.116515  329090 system_pods.go:61] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.116520  329090 system_pods.go:61] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.116525  329090 system_pods.go:61] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.116532  329090 system_pods.go:61] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.116536  329090 system_pods.go:61] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.116539  329090 system_pods.go:61] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.116545  329090 system_pods.go:61] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.116550  329090 system_pods.go:74] duration metric: took 3.105251ms to wait for pod list to return data ...
	I1123 08:45:43.116558  329090 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:43.118523  329090 default_sa.go:45] found service account: "default"
	I1123 08:45:43.118538  329090 default_sa.go:55] duration metric: took 1.974886ms for default service account to be created ...
	I1123 08:45:43.118545  329090 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:43.120780  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.120802  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.120810  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.120815  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.120819  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.120826  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.120831  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.120834  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.120839  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.120863  329090 retry.go:31] will retry after 215.602357ms: missing components: kube-dns
	I1123 08:45:43.340425  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.340455  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.340462  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.340467  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.340472  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.340477  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.340480  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.340483  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.340488  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.340504  329090 retry.go:31] will retry after 325.287893ms: missing components: kube-dns
	I1123 08:45:43.668913  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.668952  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.668962  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.668971  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.668977  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.668983  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.668987  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.668993  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.669002  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.669025  329090 retry.go:31] will retry after 462.937798ms: missing components: kube-dns
	I1123 08:45:44.135919  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:44.135950  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Running
	I1123 08:45:44.135957  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:44.135962  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:44.135967  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:44.135972  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:44.135977  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:44.135983  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:44.135988  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Running
	I1123 08:45:44.135997  329090 system_pods.go:126] duration metric: took 1.017446384s to wait for k8s-apps to be running ...
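
	[annotation] The retry loop above waited on a single missing component, kube-dns, until the coredns pod turned Running. A declarative equivalent of that loop:

	    # Block until every kube-dns pod reports the Ready condition.
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
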
	I1123 08:45:44.136008  329090 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:44.136053  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:44.148387  329090 system_svc.go:56] duration metric: took 12.375192ms WaitForService to wait for kubelet
	I1123 08:45:44.148408  329090 kubeadm.go:587] duration metric: took 12.85249816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:44.148426  329090 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:44.150884  329090 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:44.150906  329090 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:44.150923  329090 node_conditions.go:105] duration metric: took 2.493335ms to run NodePressure ...
	I1123 08:45:44.150933  329090 start.go:242] waiting for startup goroutines ...
	I1123 08:45:44.150943  329090 start.go:247] waiting for cluster config update ...
	I1123 08:45:44.150953  329090 start.go:256] writing updated cluster config ...
	I1123 08:45:44.151188  329090 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:44.154964  329090 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:44.158442  329090 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.162122  329090 pod_ready.go:94] pod "coredns-66bc5c9577-ffmn2" is "Ready"
	I1123 08:45:44.162139  329090 pod_ready.go:86] duration metric: took 3.680173ms for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.163781  329090 pod_ready.go:83] waiting for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.167030  329090 pod_ready.go:94] pod "etcd-embed-certs-756339" is "Ready"
	I1123 08:45:44.167046  329090 pod_ready.go:86] duration metric: took 3.249458ms for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.168620  329090 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.171889  329090 pod_ready.go:94] pod "kube-apiserver-embed-certs-756339" is "Ready"
	I1123 08:45:44.171905  329090 pod_ready.go:86] duration metric: took 3.265991ms for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.173681  329090 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.558804  329090 pod_ready.go:94] pod "kube-controller-manager-embed-certs-756339" is "Ready"
	I1123 08:45:44.558838  329090 pod_ready.go:86] duration metric: took 385.124392ms for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.759793  329090 pod_ready.go:83] waiting for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.158864  329090 pod_ready.go:94] pod "kube-proxy-npnsh" is "Ready"
	I1123 08:45:45.158887  329090 pod_ready.go:86] duration metric: took 399.071703ms for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.360200  329090 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758770  329090 pod_ready.go:94] pod "kube-scheduler-embed-certs-756339" is "Ready"
	I1123 08:45:45.758800  329090 pod_ready.go:86] duration metric: took 398.571969ms for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758811  329090 pod_ready.go:40] duration metric: took 1.603821403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:45.800049  329090 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:45.802064  329090 out.go:179] * Done! kubectl is now configured to use "embed-certs-756339" cluster and "default" namespace by default
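
	[annotation] With the profile finished, the kubeconfig context already points at the new cluster; a quick sanity check of what the skew line above reports:

	    kubectl config current-context   # -> embed-certs-756339
	    kubectl get nodes                # node should be Ready
	    kubectl version                  # client v1.34.2 vs server v1.34.1, minor skew 0
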
	
	
	==> CRI-O <==
	Nov 23 08:45:14 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:14.338614737Z" level=info msg="Started container" PID=1713 containerID=db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper id=002c8ea3-1a77-41b4-92c3-b10ce536bbb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5d3c567bada8d6c478ee2a6cf0140223d1118fc9328226eb923517a2bcd256c
	Nov 23 08:45:15 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:15.194476038Z" level=info msg="Removing container: 3c2b322e336bc6a6467ebdd9dc0c5441806dd7af030c146c481cf4ed0ad46183" id=b60fe394-0153-48e7-9e48-7b960f0e5305 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:15 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:15.20548823Z" level=info msg="Removed container 3c2b322e336bc6a6467ebdd9dc0c5441806dd7af030c146c481cf4ed0ad46183: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper" id=b60fe394-0153-48e7-9e48-7b960f0e5305 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.241677715Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=768ec900-ecfa-4ff0-a0f1-700de7911b4d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.242653127Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8db9ad94-1add-4cea-a634-f1fdc564930c name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.24376899Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ec994658-c0cb-415c-ae94-a5ebeee151e8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.243892749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.248468834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.248660501Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4da7f54502e3b070fcef4e49f3e6ef4bf27b2f927b8d501944ddf60e77a4d237/merged/etc/passwd: no such file or directory"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.248712242Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4da7f54502e3b070fcef4e49f3e6ef4bf27b2f927b8d501944ddf60e77a4d237/merged/etc/group: no such file or directory"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.248996599Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.276001089Z" level=info msg="Created container 4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70: kube-system/storage-provisioner/storage-provisioner" id=ec994658-c0cb-415c-ae94-a5ebeee151e8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.276432507Z" level=info msg="Starting container: 4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70" id=26d0eb19-eeaa-4c14-bc17-bb90ccf162b7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:32 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:32.278190339Z" level=info msg="Started container" PID=1729 containerID=4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70 description=kube-system/storage-provisioner/storage-provisioner id=26d0eb19-eeaa-4c14-bc17-bb90ccf162b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=404017157e4ba1ec9dc9b9d6188448955d3bf6d1e0b8df7faa11db0a7e03e767
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.098830563Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=11fb38a1-c0cb-4c52-bafc-9184f9d3880b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.099832874Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5c436a11-a135-4c0d-b5b6-3a28b2eccc8f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.100872094Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper" id=4135224e-5e73-4e3f-acc4-03d744269689 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.100989249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.106316175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.106935811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.135055604Z" level=info msg="Created container 03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper" id=4135224e-5e73-4e3f-acc4-03d744269689 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.135551706Z" level=info msg="Starting container: 03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a" id=ac965c79-82f9-4e42-b0d0-ed0b70d3652a name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.137350715Z" level=info msg="Started container" PID=1745 containerID=03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper id=ac965c79-82f9-4e42-b0d0-ed0b70d3652a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5d3c567bada8d6c478ee2a6cf0140223d1118fc9328226eb923517a2bcd256c
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.248989455Z" level=info msg="Removing container: db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221" id=8baa5ebf-436a-4497-b24d-01ca2722ef30 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:33 default-k8s-diff-port-726261 crio[562]: time="2025-11-23T08:45:33.257361638Z" level=info msg="Removed container db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk/dashboard-metrics-scraper" id=8baa5ebf-436a-4497-b24d-01ca2722ef30 name=/runtime.v1.RuntimeService/RemoveContainer
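
	[annotation] The CRI-O log above shows the dashboard-metrics-scraper container being recreated and the previous attempt removed, matching the restart counters in the status table below. The same lifecycle can be inspected from inside the node (minikube ssh -p default-k8s-diff-port-726261) with crictl, which accepts ID prefixes:

	    sudo crictl ps -a                  # containers, including Exited attempts
	    sudo crictl logs 03ce7818d9d50     # output of the latest scraper attempt
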
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	03ce7818d9d50       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   d5d3c567bada8       dashboard-metrics-scraper-6ffb444bf9-tb8zk             kubernetes-dashboard
	4df2466c9bde6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   404017157e4ba       storage-provisioner                                    kube-system
	3def24cbd2530       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   519664773f608       kubernetes-dashboard-855c9754f9-fnxnm                  kubernetes-dashboard
	7ef10f196889c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   bccfb074b8fd4       coredns-66bc5c9577-8f8f5                               kube-system
	1da1b5290a3ae       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   3d83cd81ace8c       kindnet-4zwgv                                          kube-system
	07d7e34fb9e54       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   1ba72a1606cf3       busybox                                                default
	5570992f3d35e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   675782d12a9c4       kube-proxy-sn4sp                                       kube-system
	ccbc32cc46374       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   404017157e4ba       storage-provisioner                                    kube-system
	b50ab2696e1e6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   11d26d01fb374       kube-apiserver-default-k8s-diff-port-726261            kube-system
	246dd1e6858bd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   52b4d220d3516       etcd-default-k8s-diff-port-726261                      kube-system
	dc3d29d35b622       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   6e6bd8b7953e9       kube-controller-manager-default-k8s-diff-port-726261   kube-system
	460c9f8d4b064       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   92bffd075a7ee       kube-scheduler-default-k8s-diff-port-726261            kube-system
	
	
	==> coredns [7ef10f196889c9d3ac8c5dd4fd68549002b4069ef13629eda75f63e301942ebf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46410 - 30024 "HINFO IN 8990126013200876076.5813043168634314061. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054902659s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
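
	[annotation] The i/o timeouts above are most likely CoreDNS starting before the 10.96.0.1:443 Service VIP was reachable; its informers retry, and the pod eventually serves, consistent with the Ready transitions earlier in this report. To pull the same logs without crictl:

	    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50
	    kubectl -n kube-system get pods -l k8s-app=kube-dns
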
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-726261
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-726261
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=default-k8s-diff-port-726261
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:43:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-726261
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:45:51 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:45:51 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:45:51 +0000   Sun, 23 Nov 2025 08:43:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:45:51 +0000   Sun, 23 Nov 2025 08:44:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-726261
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                72a55ebb-5247-4a4a-aaf5-7a6c6d5788f6
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-8f8f5                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-726261                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-4zwgv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-726261             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-726261    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-sn4sp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-726261             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-tb8zk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fnxnm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node default-k8s-diff-port-726261 event: Registered Node default-k8s-diff-port-726261 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-726261 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-726261 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node default-k8s-diff-port-726261 event: Registered Node default-k8s-diff-port-726261 in Controller
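
	[annotation] Everything the harness reads from this dump (Ready condition, capacity, events) is also scriptable; for example, the Ready status that node_ready.go polls for:

	    kubectl get node default-k8s-diff-port-726261 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # -> True
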
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [246dd1e6858bda7bf5fe65c5645c08c92b5d0ed231ce7b5b4abd97fac4802f8f] <==
	{"level":"warn","ts":"2025-11-23T08:44:59.754774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.763073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.779279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.793162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.803673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.812835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.823506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.835881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.845085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.856357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.871308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.879916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.889350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.905141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.918361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.930164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.950614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.960213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.969958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.977044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:44:59.996968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:00.004882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:00.015120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:00.096132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53924","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:45:06.988392Z","caller":"traceutil/trace.go:172","msg":"trace[1619490187] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"164.870295ms","start":"2025-11-23T08:45:06.823498Z","end":"2025-11-23T08:45:06.988368Z","steps":["trace[1619490187] 'process raft request'  (duration: 126.841847ms)","trace[1619490187] 'compare'  (duration: 37.650206ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:45:57 up  1:28,  0 user,  load average: 3.69, 3.74, 2.45
	Linux default-k8s-diff-port-726261 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1da1b5290a3ae07eb95a2f27c1a53ff4324a0646fd34b71680617d7f9aaaf8fb] <==
	I1123 08:45:01.792024       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:45:01.792334       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:45:01.792518       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:45:01.792535       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:45:01.792560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:45:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:45:01.997378       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:45:01.997426       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:45:01.997439       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:45:01.997572       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:45:02.397809       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:02.397829       1 metrics.go:72] Registering metrics
	I1123 08:45:02.397881       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:11.998032       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:11.999041       1 main.go:301] handling current node
	I1123 08:45:22.000158       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:22.000234       1 main.go:301] handling current node
	I1123 08:45:31.997833       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:31.997875       1 main.go:301] handling current node
	I1123 08:45:41.998172       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:41.998332       1 main.go:301] handling current node
	I1123 08:45:51.997549       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:45:51.997588       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b50ab2696e1e67f0ac0d0181ec5963bc5d4a3d2b32af3b3f35daa7d47a58c5e9] <==
	I1123 08:45:00.681620       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:45:00.681625       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:45:00.681631       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:45:00.681891       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 08:45:00.681932       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:45:00.683278       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:45:00.689316       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 08:45:00.689368       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:45:00.690147       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:45:00.690163       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:45:00.693117       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:45:00.693149       1 policy_source.go:240] refreshing policies
	I1123 08:45:00.707581       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:45:00.722749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:45:01.050767       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:45:01.135162       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:45:01.156676       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:45:01.167846       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:45:01.175235       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:45:01.307287       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.219.111"}
	I1123 08:45:01.328317       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.247.40"}
	I1123 08:45:01.594001       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:45:04.269288       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:45:04.469968       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:45:04.667802       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [dc3d29d35b622d6b93507aea04eac9baab619145bad0ffc805501a72a5c213eb] <==
	I1123 08:45:04.066710       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:45:04.066751       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 08:45:04.066771       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:45:04.066640       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:45:04.066756       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:45:04.066807       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:45:04.066886       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:45:04.066990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-726261"
	I1123 08:45:04.067083       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 08:45:04.067781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:45:04.072047       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:45:04.072162       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:04.074296       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:04.079436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:04.079451       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:45:04.079459       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:04.084274       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 08:45:04.087531       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:45:04.088749       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:04.091884       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:45:04.095190       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:45:04.097548       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:45:04.098836       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:45:04.116234       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:45:04.116262       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [5570992f3d35e1c9011ec15df9afc2ce9eba453be9b90083b5f4b396eab5dd4e] <==
	I1123 08:45:01.566671       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:45:01.629987       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:45:01.730482       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:45:01.730602       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 08:45:01.730882       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:45:01.757187       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:45:01.757252       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:45:01.765129       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:45:01.766547       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:45:01.766633       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:01.771454       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:45:01.773063       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:45:01.771959       1 config.go:200] "Starting service config controller"
	I1123 08:45:01.773458       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:45:01.771981       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:45:01.773707       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:45:01.773950       1 config.go:309] "Starting node config controller"
	I1123 08:45:01.774811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:45:01.774828       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:45:01.873545       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:45:01.873580       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:45:01.874731       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [460c9f8d4b0648fd809721216954ce6522f68a315fdb7c8fbacbaeb8288f1ffb] <==
	I1123 08:44:58.300301       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:45:00.701995       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:45:00.702085       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:00.710793       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:45:00.710833       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:45:00.710913       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:00.710924       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:00.710942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:00.710953       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:00.711293       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:45:00.711654       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:45:00.811442       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:45:00.811482       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:00.811445       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:45:04 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:04.754625     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nbrq\" (UniqueName: \"kubernetes.io/projected/01fb6bc8-9147-4bd4-8515-54325b5f4163-kube-api-access-9nbrq\") pod \"kubernetes-dashboard-855c9754f9-fnxnm\" (UID: \"01fb6bc8-9147-4bd4-8515-54325b5f4163\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fnxnm"
	Nov 23 08:45:04 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:04.754679     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56kmw\" (UniqueName: \"kubernetes.io/projected/f35100bc-3f6c-4d3c-8cac-05619bb18cc5-kube-api-access-56kmw\") pod \"dashboard-metrics-scraper-6ffb444bf9-tb8zk\" (UID: \"f35100bc-3f6c-4d3c-8cac-05619bb18cc5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk"
	Nov 23 08:45:04 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:04.754724     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f35100bc-3f6c-4d3c-8cac-05619bb18cc5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-tb8zk\" (UID: \"f35100bc-3f6c-4d3c-8cac-05619bb18cc5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk"
	Nov 23 08:45:04 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:04.754875     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/01fb6bc8-9147-4bd4-8515-54325b5f4163-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fnxnm\" (UID: \"01fb6bc8-9147-4bd4-8515-54325b5f4163\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fnxnm"
	Nov 23 08:45:09 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:09.219977     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:45:11 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:11.281147     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fnxnm" podStartSLOduration=1.869320657 podStartE2EDuration="7.28112092s" podCreationTimestamp="2025-11-23 08:45:04 +0000 UTC" firstStartedPulling="2025-11-23 08:45:05.233719919 +0000 UTC m=+8.257286568" lastFinishedPulling="2025-11-23 08:45:10.645520171 +0000 UTC m=+13.669086831" observedRunningTime="2025-11-23 08:45:11.202475341 +0000 UTC m=+14.226042010" watchObservedRunningTime="2025-11-23 08:45:11.28112092 +0000 UTC m=+14.304687589"
	Nov 23 08:45:14 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:14.188076     715 scope.go:117] "RemoveContainer" containerID="3c2b322e336bc6a6467ebdd9dc0c5441806dd7af030c146c481cf4ed0ad46183"
	Nov 23 08:45:15 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:15.193109     715 scope.go:117] "RemoveContainer" containerID="3c2b322e336bc6a6467ebdd9dc0c5441806dd7af030c146c481cf4ed0ad46183"
	Nov 23 08:45:15 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:15.193272     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:15 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:15.193492     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:16 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:16.198602     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:16 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:16.198833     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:22 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:22.315100     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:22 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:22.315301     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:32 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:32.241340     715 scope.go:117] "RemoveContainer" containerID="ccbc32cc46374e63b5543296cbf640d9549909d2c0eceece3434d9968f9a5845"
	Nov 23 08:45:33 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:33.098340     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:33 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:33.247672     715 scope.go:117] "RemoveContainer" containerID="db1b605919e37b0001c7f156d001ac6a3a8e46cf774a13b3e76ad97640a00221"
	Nov 23 08:45:33 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:33.247880     715 scope.go:117] "RemoveContainer" containerID="03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a"
	Nov 23 08:45:33 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:33.248062     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:42 default-k8s-diff-port-726261 kubelet[715]: I1123 08:45:42.314586     715 scope.go:117] "RemoveContainer" containerID="03ce7818d9d5021bea18fa1d33a9bd4a0b9916920c6b7ec5ee788fbf153cdd9a"
	Nov 23 08:45:42 default-k8s-diff-port-726261 kubelet[715]: E1123 08:45:42.314848     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-tb8zk_kubernetes-dashboard(f35100bc-3f6c-4d3c-8cac-05619bb18cc5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-tb8zk" podUID="f35100bc-3f6c-4d3c-8cac-05619bb18cc5"
	Nov 23 08:45:52 default-k8s-diff-port-726261 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:45:53 default-k8s-diff-port-726261 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:45:53 default-k8s-diff-port-726261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:45:53 default-k8s-diff-port-726261 systemd[1]: kubelet.service: Consumed 1.663s CPU time.
	
	
	==> kubernetes-dashboard [3def24cbd2530ecf5a755e903ca77a95246cde5cba1a19043250f071201a1518] <==
	2025/11/23 08:45:10 Using namespace: kubernetes-dashboard
	2025/11/23 08:45:10 Using in-cluster config to connect to apiserver
	2025/11/23 08:45:10 Using secret token for csrf signing
	2025/11/23 08:45:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:45:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:45:10 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:45:10 Generating JWE encryption key
	2025/11/23 08:45:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:45:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:45:10 Initializing JWE encryption key from synchronized object
	2025/11/23 08:45:10 Creating in-cluster Sidecar client
	2025/11/23 08:45:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:45:10 Serving insecurely on HTTP port: 9090
	2025/11/23 08:45:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:45:10 Starting overwatch
	
	
	==> storage-provisioner [4df2466c9bde6f0fd82a87d84de8b6e968bf33006410e1028c82880ce2aa8c70] <==
	I1123 08:45:32.290047       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:32.296634       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:32.296671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:32.298450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:35.752621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:40.012853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:43.610484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:46.664184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:49.685848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:49.690826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:49.690963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:49.691033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1459ed91-8156-4cda-ba23-7e39e4104244", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-726261_2837eaf0-83d0-4b26-844c-6b7636ea4d52 became leader
	I1123 08:45:49.691112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-726261_2837eaf0-83d0-4b26-844c-6b7636ea4d52!
	W1123 08:45:49.692749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:49.696065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:49.791302       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-726261_2837eaf0-83d0-4b26-844c-6b7636ea4d52!
	W1123 08:45:51.698396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:51.703041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.706659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.711254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:55.715214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:55.719863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:57.723967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:57.728482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ccbc32cc46374e63b5543296cbf640d9549909d2c0eceece3434d9968f9a5845] <==
	I1123 08:45:01.486423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:45:31.489228       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
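The two storage-provisioner logs above capture a restart-and-takeover sequence: the first instance (ccbc32cc...) exited after a 30s i/o timeout reaching the apiserver at https://10.96.0.1:443, and its replacement (4df2466c...) only starts its provisioner controller once it acquires the kube-system/k8s.io-minikube-hostpath lease. The repeated deprecation warnings show this legacy provisioner still uses an Endpoints-backed lock. Below is a minimal client-go sketch of the same acquire-then-work pattern using the now-preferred Leases lock; the durations and identity handling are illustrative assumptions, not minikube's actual code.

	// Minimal leader-election sketch (assumed values marked); mirrors the
	// acquire-then-start sequence in the storage-provisioner log above.
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // identity scheme is an assumption
	
		// Lease namespace/name taken from the log; lock type swapped to Leases,
		// which is why a current client would not hit the Endpoints warnings.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id})
		if err != nil {
			log.Fatal(err)
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative defaults
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
			},
		})
	}

Only the current lease holder runs the controller loop, which is why the replacement instance logs "successfully acquired lease" immediately before "Starting provisioner controller".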
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261: exit status 2 (347.027768ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-726261 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.76s)
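One detail worth noting from the kubelet log in the post-mortem above: dashboard-metrics-scraper is crash-looping, and the kubelet's restart back-off doubles between attempts ("back-off 10s", then "back-off 20s"). A toy sketch of that schedule follows; the 10s initial delay and 5m cap are the kubelet defaults I am assuming here, not values read from this cluster.

	// Toy illustration of kubelet crash-loop back-off doubling (assumed defaults).
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		d, cap := 10*time.Second, 5*time.Minute // assumed: 10s initial, 5m ceiling
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %v\n", restart, d)
			if d *= 2; d > cap {
				d = cap
			}
		}
	}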

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (5.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-187607 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-187607 --alsologtostderr -v=1: exit status 80 (1.60812072s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-187607 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:45:53.689096  336008 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:53.689341  336008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:53.689348  336008 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:53.689352  336008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:53.689559  336008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:53.689803  336008 out.go:368] Setting JSON to false
	I1123 08:45:53.689824  336008 mustload.go:66] Loading cluster: no-preload-187607
	I1123 08:45:53.690175  336008 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:53.690528  336008 cli_runner.go:164] Run: docker container inspect no-preload-187607 --format={{.State.Status}}
	I1123 08:45:53.708268  336008 host.go:66] Checking if "no-preload-187607" exists ...
	I1123 08:45:53.708505  336008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:53.765082  336008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:45:53.756016401 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:53.765777  336008 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-187607 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:45:53.767389  336008 out.go:179] * Pausing node no-preload-187607 ... 
	I1123 08:45:53.768404  336008 host.go:66] Checking if "no-preload-187607" exists ...
	I1123 08:45:53.768634  336008 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:53.768672  336008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-187607
	I1123 08:45:53.785259  336008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/no-preload-187607/id_rsa Username:docker}
	I1123 08:45:53.883829  336008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:53.909731  336008 pause.go:52] kubelet running: true
	I1123 08:45:53.909798  336008 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:45:54.091374  336008 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:45:54.091457  336008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:45:54.160937  336008 cri.go:89] found id: "c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214"
	I1123 08:45:54.160963  336008 cri.go:89] found id: "0f65ae30d25f0e6796dc383f8f723ee3a043d903cea5be75fb9ad29429a39fa0"
	I1123 08:45:54.160970  336008 cri.go:89] found id: "c7d79d91171ad2356ff4429be5853d33c2d0b45d87251302f6d1b783580ef9ee"
	I1123 08:45:54.160976  336008 cri.go:89] found id: "3c52daba0a02a5f43db9a936c7bee455eaed07b8846c57f2a36e9d42a2c662b1"
	I1123 08:45:54.160981  336008 cri.go:89] found id: "4aa18c92f3f578f172c0e283a0c69d67753703f1ad1da5f13d492a4f417e49f1"
	I1123 08:45:54.160987  336008 cri.go:89] found id: "82c67fc0d0d50ab08e241d39a2087b1c3e8bc3f645f3bfdeeb79a7ab0f98af22"
	I1123 08:45:54.160991  336008 cri.go:89] found id: "58bccd8b525725bf0e119a031f7704340d4a582f1f9d22e35700e56c5414fc15"
	I1123 08:45:54.160996  336008 cri.go:89] found id: "f7dc3b2c3eb35a85ed7f46e5a51507d750e9e62d6d4e5f5d8cf809a595a3fbb5"
	I1123 08:45:54.161000  336008 cri.go:89] found id: "f9c1a46853ec5ff3a03c27f07d016527c9affe0091ecc22c9627ff73f8705db1"
	I1123 08:45:54.161025  336008 cri.go:89] found id: "9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f"
	I1123 08:45:54.161034  336008 cri.go:89] found id: "f6600a361a3baa6724f669b340ef4e64b2062295514dc30b0ed6e119477cc6b2"
	I1123 08:45:54.161039  336008 cri.go:89] found id: ""
	I1123 08:45:54.161096  336008 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:45:54.172750  336008 retry.go:31] will retry after 174.456074ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:54Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:45:54.348214  336008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:54.364028  336008 pause.go:52] kubelet running: false
	I1123 08:45:54.364082  336008 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:45:54.530748  336008 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:45:54.530823  336008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:45:54.620271  336008 cri.go:89] found id: "c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214"
	I1123 08:45:54.620294  336008 cri.go:89] found id: "0f65ae30d25f0e6796dc383f8f723ee3a043d903cea5be75fb9ad29429a39fa0"
	I1123 08:45:54.620300  336008 cri.go:89] found id: "c7d79d91171ad2356ff4429be5853d33c2d0b45d87251302f6d1b783580ef9ee"
	I1123 08:45:54.620313  336008 cri.go:89] found id: "3c52daba0a02a5f43db9a936c7bee455eaed07b8846c57f2a36e9d42a2c662b1"
	I1123 08:45:54.620317  336008 cri.go:89] found id: "4aa18c92f3f578f172c0e283a0c69d67753703f1ad1da5f13d492a4f417e49f1"
	I1123 08:45:54.620322  336008 cri.go:89] found id: "82c67fc0d0d50ab08e241d39a2087b1c3e8bc3f645f3bfdeeb79a7ab0f98af22"
	I1123 08:45:54.620326  336008 cri.go:89] found id: "58bccd8b525725bf0e119a031f7704340d4a582f1f9d22e35700e56c5414fc15"
	I1123 08:45:54.620331  336008 cri.go:89] found id: "f7dc3b2c3eb35a85ed7f46e5a51507d750e9e62d6d4e5f5d8cf809a595a3fbb5"
	I1123 08:45:54.620335  336008 cri.go:89] found id: "f9c1a46853ec5ff3a03c27f07d016527c9affe0091ecc22c9627ff73f8705db1"
	I1123 08:45:54.620344  336008 cri.go:89] found id: "9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f"
	I1123 08:45:54.620353  336008 cri.go:89] found id: "f6600a361a3baa6724f669b340ef4e64b2062295514dc30b0ed6e119477cc6b2"
	I1123 08:45:54.620357  336008 cri.go:89] found id: ""
	I1123 08:45:54.620404  336008 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:45:54.633961  336008 retry.go:31] will retry after 315.477844ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:54Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:45:54.950480  336008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:54.963912  336008 pause.go:52] kubelet running: false
	I1123 08:45:54.963962  336008 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:45:55.136739  336008 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:45:55.136805  336008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:45:55.206795  336008 cri.go:89] found id: "c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214"
	I1123 08:45:55.206817  336008 cri.go:89] found id: "0f65ae30d25f0e6796dc383f8f723ee3a043d903cea5be75fb9ad29429a39fa0"
	I1123 08:45:55.206823  336008 cri.go:89] found id: "c7d79d91171ad2356ff4429be5853d33c2d0b45d87251302f6d1b783580ef9ee"
	I1123 08:45:55.206829  336008 cri.go:89] found id: "3c52daba0a02a5f43db9a936c7bee455eaed07b8846c57f2a36e9d42a2c662b1"
	I1123 08:45:55.206833  336008 cri.go:89] found id: "4aa18c92f3f578f172c0e283a0c69d67753703f1ad1da5f13d492a4f417e49f1"
	I1123 08:45:55.206838  336008 cri.go:89] found id: "82c67fc0d0d50ab08e241d39a2087b1c3e8bc3f645f3bfdeeb79a7ab0f98af22"
	I1123 08:45:55.206843  336008 cri.go:89] found id: "58bccd8b525725bf0e119a031f7704340d4a582f1f9d22e35700e56c5414fc15"
	I1123 08:45:55.206848  336008 cri.go:89] found id: "f7dc3b2c3eb35a85ed7f46e5a51507d750e9e62d6d4e5f5d8cf809a595a3fbb5"
	I1123 08:45:55.206853  336008 cri.go:89] found id: "f9c1a46853ec5ff3a03c27f07d016527c9affe0091ecc22c9627ff73f8705db1"
	I1123 08:45:55.206861  336008 cri.go:89] found id: "9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f"
	I1123 08:45:55.206878  336008 cri.go:89] found id: "f6600a361a3baa6724f669b340ef4e64b2062295514dc30b0ed6e119477cc6b2"
	I1123 08:45:55.206887  336008 cri.go:89] found id: ""
	I1123 08:45:55.206933  336008 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:45:55.222356  336008 out.go:203] 
	W1123 08:45:55.223746  336008 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:45:55.223765  336008 out.go:285] * 
	* 
	W1123 08:45:55.229265  336008 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:45:55.230528  336008 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-187607 --alsologtostderr -v=1 failed: exit status 80
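The stderr above pins down the exit status 80: after disabling the kubelet, pause enumerates running containers with "sudo runc list -f json", and every attempt fails because /run/runc does not exist on this crio node. /run/runc is runc's default state directory, created only once runc itself has managed a container there, and the crio runtime may keep its state elsewhere. minikube retries with growing delays (174ms, then 315ms) before giving up with GUEST_PAUSE. A condensed sketch of that retry-then-fail shape follows; the helper name and back-off constants are mine, not minikube's.

	// Sketch of the retry loop visible in the stderr above (assumed constants).
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// listRunning shells out the same way the log does. On this node it fails
	// with "open /run/runc: no such file or directory" because nothing has ever
	// created runc state under runc's default root.
	func listRunning() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list -f json: %w: %s", err, out)
		}
		return nil
	}
	
	func main() {
		backoff := 150 * time.Millisecond // illustrative; minikube's retry.go uses jittered delays
		var err error
		for attempt := 1; attempt <= 3; attempt++ {
			if err = listRunning(); err == nil {
				return
			}
			fmt.Printf("will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff *= 2
		}
		// minikube surfaces this as "Exiting due to GUEST_PAUSE" and exits 80.
		fmt.Println("giving up:", err)
	}

Note that querying the CRI socket succeeds on the same node (the crictl ps calls in the stderr return container IDs), so it is specifically the bare runc listing, not the runtime itself, that is broken here.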
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-187607
helpers_test.go:243: (dbg) docker inspect no-preload-187607:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469",
	        "Created": "2025-11-23T08:43:30.899099908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324268,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:51.556025805Z",
	            "FinishedAt": "2025-11-23T08:44:50.55082253Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/hostname",
	        "HostsPath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/hosts",
	        "LogPath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469-json.log",
	        "Name": "/no-preload-187607",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-187607:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-187607",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469",
	                "LowerDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-187607",
	                "Source": "/var/lib/docker/volumes/no-preload-187607/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-187607",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-187607",
	                "name.minikube.sigs.k8s.io": "no-preload-187607",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "775f86894233b2b6953d6ad591546cb31e3c92d7471bc4799925a827744a3864",
	            "SandboxKey": "/var/run/docker/netns/775f86894233",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-187607": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e4a86ee726dad104f8707d936e5a79c6311cee3cba1074fc9a2490264915ec02",
	                    "EndpointID": "83264f55f1d42ad22f0f00e032ea609dea0e50cfe3c86a0890d9338bd54ea909",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ca:37:fa:5a:42:64",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-187607",
	                        "c79339fc6cb1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
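
For reference, the inspect dump above shows the kic container publishing its ports only on 127.0.0.1 (22/tcp → 33126, 8443/tcp → 33129, and so on) under NetworkSettings.Ports. The sketch below is not part of the harness; it is a minimal Go program that decodes that mapping back out of `docker inspect`, assuming only the fields visible in this report and using the container name from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// binding mirrors one entry of NetworkSettings.Ports in `docker inspect`
// output (field names as Docker emits them in the dump above).
type binding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// inspect models only the slice of the payload this report uses.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]binding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// Container name taken from this test run.
	out, err := exec.Command("docker", "inspect", "no-preload-187607").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect // `docker inspect` returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 {
		panic("no such container")
	}
	for port, binds := range containers[0].NetworkSettings.Ports {
		for _, b := range binds { // unbound ports have a nil slice
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}

Run against the container above, this would print one line per published port, matching the Ports block in the dump.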
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-187607 -n no-preload-187607
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-187607 -n no-preload-187607: exit status 2 (354.389969ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
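
The harness tolerates the non-zero exit here because, as its own message notes, exit status 2 from `minikube status` "may be ok": stdout still reports the host as Running even though the overall status is non-zero (this profile had just been paused). A minimal Go sketch of that same tolerant check, assuming only what this log shows:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as helpers_test.go:247 above.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-187607", "-n", "no-preload-187607")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host: %s\n", host)
	case errors.As(err, &exitErr):
		// A non-zero exit still carries usable stdout; the harness logs
		// it as "may be ok" instead of failing outright.
		fmt.Printf("host: %s (exit %d, may be ok)\n", host, exitErr.ExitCode())
	default:
		panic(err) // the binary could not be run at all
	}
}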
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-187607 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-187607 logs -n 25: (1.22580136s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p no-preload-187607 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ addons  │ enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-177890                                                                                                                                                                                                               │ disable-driver-mounts-177890 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-057894 image list --format=json                                                                                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-057894 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ default-k8s-diff-port-726261 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p default-k8s-diff-port-726261 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ image   │ no-preload-187607 image list --format=json                                                                                                                                                                                                    │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-187607 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:45:01.745123  329090 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:01.745432  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745440  329090 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:01.745446  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745739  329090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:01.746375  329090 out.go:368] Setting JSON to false
	I1123 08:45:01.748064  329090 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5249,"bootTime":1763882253,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:45:01.748157  329090 start.go:143] virtualization: kvm guest
	I1123 08:45:01.750156  329090 out.go:179] * [embed-certs-756339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:45:01.753393  329090 notify.go:221] Checking for updates...
	I1123 08:45:01.753398  329090 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:45:01.755146  329090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:45:01.756598  329090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:01.757836  329090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:45:01.758954  329090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:45:01.760360  329090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:45:01.765276  329090 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765522  329090 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765681  329090 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:45:01.765827  329090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:45:01.800644  329090 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:45:01.801313  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.871017  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.860213573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.871190  329090 docker.go:319] overlay module found
	I1123 08:45:01.872879  329090 out.go:179] * Using the docker driver based on user configuration
	I1123 08:45:01.874146  329090 start.go:309] selected driver: docker
	I1123 08:45:01.874172  329090 start.go:927] validating driver "docker" against <nil>
	I1123 08:45:01.874185  329090 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:45:01.874731  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.950283  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.938442114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.950526  329090 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:45:01.950805  329090 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.952251  329090 out.go:179] * Using Docker driver with root privileges
	I1123 08:45:01.953421  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:01.953493  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:01.953508  329090 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:45:01.953584  329090 start.go:353] cluster config:
	{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:01.954827  329090 out.go:179] * Starting "embed-certs-756339" primary control-plane node in "embed-certs-756339" cluster
	I1123 08:45:01.955848  329090 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:45:01.957107  329090 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:45:01.958365  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:01.958393  329090 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:45:01.958408  329090 cache.go:65] Caching tarball of preloaded images
	I1123 08:45:01.958465  329090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:45:01.958507  329090 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:45:01.958523  329090 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:45:01.958635  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:01.958661  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json: {Name:mk2bf238bbe57398e8f0e67e0ff345b4c996e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:01.983475  329090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:45:01.983497  329090 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:45:01.983513  329090 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:45:01.983540  329090 start.go:360] acquireMachinesLock for embed-certs-756339: {Name:mk2607c5ea38ca6bd330e0a548b36202f67f84a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:01.983642  329090 start.go:364] duration metric: took 84.653µs to acquireMachinesLock for "embed-certs-756339"
	I1123 08:45:01.983672  329090 start.go:93] Provisioning new machine with config: &{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:01.983792  329090 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:45:01.986901  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.692445857s)
	I1123 08:45:01.987002  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.670756175s)
	I1123 08:45:01.987136  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.507320621s)
	I1123 08:45:01.987186  323816 api_server.go:72] duration metric: took 2.902108336s to wait for apiserver process to appear ...
	I1123 08:45:01.987204  323816 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:01.987282  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:01.988808  323816 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-187607 addons enable metrics-server
	
	I1123 08:45:01.992707  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:45:01.992732  323816 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:45:01.994529  323816 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 08:45:01.422757  323135 addons.go:530] duration metric: took 3.555416147s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:01.910007  323135 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:45:01.915784  323135 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:45:01.917062  323135 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:01.917089  323135 api_server.go:131] duration metric: took 507.92158ms to wait for apiserver health ...
	I1123 08:45:01.917100  323135 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:01.921785  323135 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:01.921998  323135 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.922039  323135 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.922068  323135 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.922079  323135 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.922087  323135 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.922095  323135 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.922107  323135 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.922115  323135 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.922124  323135 system_pods.go:74] duration metric: took 5.016936ms to wait for pod list to return data ...
	I1123 08:45:01.922189  323135 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:01.925409  323135 default_sa.go:45] found service account: "default"
	I1123 08:45:01.925452  323135 default_sa.go:55] duration metric: took 3.245595ms for default service account to be created ...
	I1123 08:45:01.925463  323135 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:01.931804  323135 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.931872  323135 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.931898  323135 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.931961  323135 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.931995  323135 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.932018  323135 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.932037  323135 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.932066  323135 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.932076  323135 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.932086  323135 system_pods.go:126] duration metric: took 6.61665ms to wait for k8s-apps to be running ...
	I1123 08:45:01.932097  323135 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:01.932143  323135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:01.947263  323135 system_svc.go:56] duration metric: took 15.160659ms WaitForService to wait for kubelet
	I1123 08:45:01.947298  323135 kubeadm.go:587] duration metric: took 4.08017724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.947325  323135 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:01.950481  323135 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:01.950509  323135 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:01.950526  323135 node_conditions.go:105] duration metric: took 3.194245ms to run NodePressure ...
	I1123 08:45:01.950541  323135 start.go:242] waiting for startup goroutines ...
	I1123 08:45:01.950555  323135 start.go:247] waiting for cluster config update ...
	I1123 08:45:01.950571  323135 start.go:256] writing updated cluster config ...
	I1123 08:45:01.950876  323135 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:01.955038  323135 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:01.958449  323135 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:03.965246  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:01.995584  323816 addons.go:530] duration metric: took 2.910424664s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:02.487321  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:02.491678  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:45:02.492738  323816 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:02.492762  323816 api_server.go:131] duration metric: took 505.498506ms to wait for apiserver health ...
	I1123 08:45:02.492770  323816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:02.496254  323816 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:02.496282  323816 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.496290  323816 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.496296  323816 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.496302  323816 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.496310  323816 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.496317  323816 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.496324  323816 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.496334  323816 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.496340  323816 system_pods.go:74] duration metric: took 3.565076ms to wait for pod list to return data ...
	I1123 08:45:02.496348  323816 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:02.498409  323816 default_sa.go:45] found service account: "default"
	I1123 08:45:02.498426  323816 default_sa.go:55] duration metric: took 2.073405ms for default service account to be created ...
	I1123 08:45:02.498434  323816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:02.500853  323816 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.500888  323816 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.500899  323816 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.500912  323816 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.500929  323816 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.500941  323816 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.500951  323816 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.500961  323816 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.500971  323816 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.500978  323816 system_pods.go:126] duration metric: took 2.538671ms to wait for k8s-apps to be running ...
	I1123 08:45:02.500991  323816 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:02.501036  323816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:02.522199  323816 system_svc.go:56] duration metric: took 21.201972ms WaitForService to wait for kubelet
	I1123 08:45:02.522225  323816 kubeadm.go:587] duration metric: took 3.437147085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:02.522246  323816 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:02.524870  323816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:02.524905  323816 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:02.524925  323816 node_conditions.go:105] duration metric: took 2.673388ms to run NodePressure ...
	I1123 08:45:02.524943  323816 start.go:242] waiting for startup goroutines ...
	I1123 08:45:02.524953  323816 start.go:247] waiting for cluster config update ...
	I1123 08:45:02.524970  323816 start.go:256] writing updated cluster config ...
	I1123 08:45:02.525241  323816 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:02.529440  323816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:02.532956  323816 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:04.545550  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:01.985817  329090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:45:01.986054  329090 start.go:159] libmachine.API.Create for "embed-certs-756339" (driver="docker")
	I1123 08:45:01.986094  329090 client.go:173] LocalClient.Create starting
	I1123 08:45:01.986158  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:45:01.986202  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986228  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986299  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:45:01.986331  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986349  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986747  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:45:02.006351  329090 cli_runner.go:211] docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:45:02.006428  329090 network_create.go:284] running [docker network inspect embed-certs-756339] to gather additional debugging logs...
	I1123 08:45:02.006453  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339
	W1123 08:45:02.024029  329090 cli_runner.go:211] docker network inspect embed-certs-756339 returned with exit code 1
	I1123 08:45:02.024056  329090 network_create.go:287] error running [docker network inspect embed-certs-756339]: docker network inspect embed-certs-756339: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-756339 not found
	I1123 08:45:02.024076  329090 network_create.go:289] output of [docker network inspect embed-certs-756339]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-756339 not found
	
	** /stderr **
	I1123 08:45:02.024188  329090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:02.041589  329090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:45:02.042147  329090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:45:02.042884  329090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:45:02.043340  329090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c80b7bca17a7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:f1:41:59:09:b5} reservation:<nil>}
	I1123 08:45:02.043937  329090 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8e58961f3024 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b6:f0:e4:3c:63:d5} reservation:<nil>}
	I1123 08:45:02.044437  329090 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e4a86ee726da IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ae:37:bc:fe:9d:3a} reservation:<nil>}
	I1123 08:45:02.045221  329090 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06cd0}
	I1123 08:45:02.045242  329090 network_create.go:124] attempt to create docker network embed-certs-756339 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1123 08:45:02.045287  329090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-756339 embed-certs-756339
	I1123 08:45:02.095267  329090 network_create.go:108] docker network embed-certs-756339 192.168.103.0/24 created
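The network.go scan above probes candidate /24s upward from 192.168.49.0 in steps of 9 until one is free; here six earlier subnets already back other cluster bridges, so 192.168.103.0/24 wins. A compact sketch of that selection based only on the host's interface list (minikube's real logic also tracks its own reservations, so treat this as illustrative):

// subnet_probe.go
package main

import (
	"fmt"
	"net"
)

// freeSubnet walks 192.168.49.0/24 upward in steps of 9 (the spacing visible
// in the log above) and returns the first /24 no local interface sits in.
func freeSubnet() (string, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return "", err
	}
	for third := 49; third <= 247; third += 9 {
		_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if err != nil {
			return "", err
		}
		taken := false
		for _, ifc := range ifaces {
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
					taken = true // e.g. an existing br-... bridge already uses it
				}
			}
		}
		if !taken {
			return candidate.String(), nil
		}
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	subnet, err := freeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet)
}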
	I1123 08:45:02.095296  329090 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-756339" container
	I1123 08:45:02.095350  329090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:45:02.111533  329090 cli_runner.go:164] Run: docker volume create embed-certs-756339 --label name.minikube.sigs.k8s.io=embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:45:02.128824  329090 oci.go:103] Successfully created a docker volume embed-certs-756339
	I1123 08:45:02.128896  329090 cli_runner.go:164] Run: docker run --rm --name embed-certs-756339-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --entrypoint /usr/bin/test -v embed-certs-756339:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:45:02.559029  329090 oci.go:107] Successfully prepared a docker volume embed-certs-756339
	I1123 08:45:02.559098  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:02.559108  329090 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:45:02.559163  329090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:45:06.464312  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:08.466215  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:06.707246  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:09.040137  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:11.046122  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:07.131448  329090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.572224972s)
	I1123 08:45:07.131484  329090 kic.go:203] duration metric: took 4.572370498s to extract preloaded images to volume ...
	W1123 08:45:07.131573  329090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:45:07.131616  329090 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:45:07.131860  329090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:45:07.219659  329090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-756339 --name embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-756339 --network embed-certs-756339 --ip 192.168.103.2 --volume embed-certs-756339:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:45:07.635482  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Running}}
	I1123 08:45:07.658965  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.681327  329090 cli_runner.go:164] Run: docker exec embed-certs-756339 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:45:07.737769  329090 oci.go:144] the created container "embed-certs-756339" has a running status.
	I1123 08:45:07.737802  329090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa...
	I1123 08:45:07.895228  329090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:45:07.935222  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.958382  329090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:45:07.958405  329090 kic_runner.go:114] Args: [docker exec --privileged embed-certs-756339 chown docker:docker /home/docker/.ssh/authorized_keys]
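The kic_runner steps above generate the machine's RSA key and install the public half as /home/docker/.ssh/authorized_keys (the 381-byte copy in the log). A sketch of producing such an authorized_keys line, assuming golang.org/x/crypto/ssh is available:

// kic_sshkey.go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	// MarshalAuthorizedKey yields the "ssh-rsa AAAA...\n" line sshd expects.
	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
}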
	I1123 08:45:08.015520  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:08.039803  329090 machine.go:94] provisionDockerMachine start ...
	I1123 08:45:08.039898  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:08.064345  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:08.064680  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:08.064723  329090 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:45:08.065347  329090 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47890->127.0.0.1:33131: read: connection reset by peer
	I1123 08:45:11.244730  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
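The failed handshake at 08:45:08 followed by a clean hostname run at 08:45:11 is the expected pattern while sshd inside the fresh container is still starting: the dial is retried until the forwarded port accepts connections. A minimal sketch of such a wait loop (address and timings loosely mirror the log):

// dial_retry.go
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port accepts connections; the ssh handshake can proceed
		}
		time.Sleep(500 * time.Millisecond) // sshd not up yet (e.g. connection reset); retry
	}
	return fmt.Errorf("ssh port %s not reachable within %s", addr, deadline)
}

func main() {
	if err := waitForSSH("127.0.0.1:33131", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("ssh port is up")
}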
	I1123 08:45:11.244755  329090 ubuntu.go:182] provisioning hostname "embed-certs-756339"
	I1123 08:45:11.244812  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.273763  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.274055  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.274072  329090 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-756339 && echo "embed-certs-756339" | sudo tee /etc/hostname
	I1123 08:45:11.457570  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.457714  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.488146  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.488457  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.488485  329090 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-756339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-756339/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-756339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:45:11.660198  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:45:11.660362  329090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:45:11.660453  329090 ubuntu.go:190] setting up certificates
	I1123 08:45:11.660471  329090 provision.go:84] configureAuth start
	I1123 08:45:11.661011  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:11.684982  329090 provision.go:143] copyHostCerts
	I1123 08:45:11.685043  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:45:11.685053  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:45:11.685140  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:45:11.685249  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:45:11.685255  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:45:11.685292  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:45:11.685383  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:45:11.685391  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:45:11.685427  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:45:11.685506  329090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.embed-certs-756339 san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]
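The server cert generated above is signed by the minikube CA and carries the SAN list shown in the log line. For illustration, a self-signed variant with the same SANs using only the standard library (minikube's real cert is CA-signed; Organization and validity mirror the log and the config dump's CertExpiration:26280h0m0s):

// san_cert.go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-756339"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as in the log: DNS names plus the loopback and node IPs.
		DNSNames:    []string{"embed-certs-756339", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}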
	I1123 08:45:11.758697  329090 provision.go:177] copyRemoteCerts
	I1123 08:45:11.758777  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:45:11.758833  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.787179  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:11.905965  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:45:11.934744  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:45:11.961707  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:45:11.985963  329090 provision.go:87] duration metric: took 325.479379ms to configureAuth
	I1123 08:45:11.985992  329090 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:45:11.986220  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:11.986358  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.011499  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:12.011833  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:12.011872  329090 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:45:12.373361  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:45:12.373388  329090 machine.go:97] duration metric: took 4.333562614s to provisionDockerMachine
	I1123 08:45:12.373402  329090 client.go:176] duration metric: took 10.387301049s to LocalClient.Create
	I1123 08:45:12.373431  329090 start.go:167] duration metric: took 10.387376613s to libmachine.API.Create "embed-certs-756339"
	I1123 08:45:12.373444  329090 start.go:293] postStartSetup for "embed-certs-756339" (driver="docker")
	I1123 08:45:12.373458  329090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:45:12.373521  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:45:12.373575  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.394472  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.505303  329090 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:45:12.509881  329090 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:45:12.509946  329090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:45:12.509962  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:45:12.510025  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:45:12.510127  329090 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:45:12.510256  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:45:12.520339  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:12.547586  329090 start.go:296] duration metric: took 174.127267ms for postStartSetup
	I1123 08:45:12.548040  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.572325  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:12.572597  329090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:45:12.572652  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.595241  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.708576  329090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:45:12.713786  329090 start.go:128] duration metric: took 10.729979645s to createHost
	I1123 08:45:12.713812  329090 start.go:83] releasing machines lock for "embed-certs-756339", held for 10.730153164s
	I1123 08:45:12.713888  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.744434  329090 ssh_runner.go:195] Run: cat /version.json
	I1123 08:45:12.744496  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.744678  329090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:45:12.744776  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.771659  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.771722  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.970377  329090 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:12.980003  329090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:45:13.031076  329090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:45:13.037986  329090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:45:13.038091  329090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:45:13.078655  329090 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
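The find/-exec mv above is how conflicting bridge CNI configs get parked so kindnet's config wins. A Go rendition of the same rename (same directory and suffix as the log; error handling simplified):

// cni_disable.go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
}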
	I1123 08:45:13.078678  329090 start.go:496] detecting cgroup driver to use...
	I1123 08:45:13.078778  329090 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:45:13.078826  329090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:45:13.102501  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:45:13.121011  329090 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:45:13.121088  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:45:13.144025  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:45:13.166610  329090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:45:13.266885  329090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:45:13.383738  329090 docker.go:234] disabling docker service ...
	I1123 08:45:13.383808  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:45:13.408902  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:45:13.425055  329090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:45:13.533375  329090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:45:13.641970  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:45:13.655349  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:45:13.672802  329090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:45:13.672859  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.682619  329090 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:45:13.682671  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.691340  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.700633  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.709880  329090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:45:13.717844  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.726872  329090 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.741035  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
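Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (reconstructed from the commands, not captured from the node; the section headers are the stock cri-o ones and are assumed here):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]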
	I1123 08:45:13.750011  329090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:45:13.757738  329090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:45:13.764834  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:13.846176  329090 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:45:15.041719  329090 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.195506975s)
	I1123 08:45:15.041743  329090 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:45:15.041806  329090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:45:15.046071  329090 start.go:564] Will wait 60s for crictl version
	I1123 08:45:15.046136  329090 ssh_runner.go:195] Run: which crictl
	I1123 08:45:15.049573  329090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:45:15.078843  329090 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:45:15.078920  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.108962  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.139712  329090 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 08:45:10.968346  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.466785  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.540283  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:16.038123  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:15.141197  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:15.159501  329090 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 08:45:15.163431  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
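The bash one-liner above rewrites /etc/hosts by dropping any stale host.minikube.internal line and appending the gateway mapping. The same edit in Go (path and names from the log; illustrative, not minikube's code):

// hosts_pin.go
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop lines ending in "<tab>name", like the log's grep -v.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}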
	I1123 08:45:15.173476  329090 kubeadm.go:884] updating cluster {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:45:15.173575  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:15.173616  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.210172  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.210193  329090 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:45:15.210244  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.237085  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.237104  329090 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:45:15.237113  329090 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 08:45:15.237217  329090 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-756339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:45:15.237295  329090 ssh_runner.go:195] Run: crio config
	I1123 08:45:15.283601  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:15.283625  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:15.283643  329090 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:45:15.283669  329090 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-756339 NodeName:embed-certs-756339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:45:15.283837  329090 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-756339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:45:15.283904  329090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:45:15.292504  329090 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:45:15.292566  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:45:15.300378  329090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 08:45:15.312974  329090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:45:15.327882  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1123 08:45:15.340181  329090 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:45:15.343646  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:15.354110  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:15.443097  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:15.467751  329090 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339 for IP: 192.168.103.2
	I1123 08:45:15.467775  329090 certs.go:195] generating shared ca certs ...
	I1123 08:45:15.467794  329090 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.467944  329090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:45:15.468013  329090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:45:15.468026  329090 certs.go:257] generating profile certs ...
	I1123 08:45:15.468092  329090 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key
	I1123 08:45:15.468108  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt with IP's: []
	I1123 08:45:15.681556  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt ...
	I1123 08:45:15.681578  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt: {Name:mk22797cd88ef1f778f787e25af3588a79d11855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681755  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key ...
	I1123 08:45:15.681771  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key: {Name:mk2507e79a5f05fa7cb11db2054cd014292902df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681880  329090 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354
	I1123 08:45:15.681896  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1123 08:45:15.727484  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 ...
	I1123 08:45:15.727506  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354: {Name:mkade0e3ba918afced6504828d64527edcb7e06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727677  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 ...
	I1123 08:45:15.727718  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354: {Name:mke39adf49845e1231f060e2780420238d4a87bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727834  329090 certs.go:382] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt
	I1123 08:45:15.727927  329090 certs.go:386] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key
	I1123 08:45:15.728008  329090 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key
	I1123 08:45:15.728025  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt with IP's: []
	I1123 08:45:15.834669  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt ...
	I1123 08:45:15.834720  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt: {Name:mkad5e6304235e6d8f0ebd086b0ccf458022d6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.834861  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key ...
	I1123 08:45:15.834879  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key: {Name:mka603d9600779233619dbc354e88b03aa5d1f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.835045  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:45:15.835081  329090 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:45:15.835092  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:45:15.835118  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:45:15.835142  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:45:15.835178  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:45:15.835218  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:15.835729  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:45:15.855139  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:45:15.873868  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:45:15.894547  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:45:15.912933  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:45:15.930981  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:45:15.949401  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:45:15.970429  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:45:15.989205  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:45:16.008793  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:45:16.025737  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:45:16.043175  329090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:45:16.055931  329090 ssh_runner.go:195] Run: openssl version
	I1123 08:45:16.061639  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:45:16.069652  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073176  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073220  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.108921  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:45:16.116885  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:45:16.124882  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128591  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128656  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.185316  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:45:16.195245  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:45:16.206667  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211327  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211374  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.251180  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
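Each openssl x509 -hash run above computes the subject hash that names the /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA.pem, 51391683.0 for 14488.pem). A sketch of that trust step (requires root to write /etc/ssl/certs; simplified relative to the log's test-and-link shell):

// cahash_link.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func trustCert(pemPath string) error {
	// Ask openssl for the certificate's subject hash, as the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// os.Symlink fails if the link exists; remove first, like `ln -fs`.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}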
	I1123 08:45:16.260175  329090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:45:16.264022  329090 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:45:16.264083  329090 kubeadm.go:401] StartCluster: {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:16.264171  329090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:45:16.264218  329090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:45:16.292235  329090 cri.go:89] found id: ""
	I1123 08:45:16.292292  329090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:45:16.300794  329090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:45:16.308741  329090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:45:16.308794  329090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:45:16.316404  329090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:45:16.316422  329090 kubeadm.go:158] found existing configuration files:
	
	I1123 08:45:16.316458  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:45:16.324309  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:45:16.324349  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:45:16.332260  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:45:16.340786  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:45:16.340842  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:45:16.348658  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.358536  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:45:16.358583  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.368595  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:45:16.377891  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:45:16.377952  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:45:16.386029  329090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:45:16.424131  329090 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:45:16.424226  329090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:45:16.444456  329090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:45:16.444527  329090 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:45:16.444572  329090 kubeadm.go:319] OS: Linux
	I1123 08:45:16.444654  329090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:45:16.444763  329090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:45:16.444824  329090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:45:16.444916  329090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:45:16.444986  329090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:45:16.445059  329090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:45:16.445128  329090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:45:16.445197  329090 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:45:16.502432  329090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:45:16.502566  329090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:45:16.502717  329090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:45:16.512573  329090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:45:16.514857  329090 out.go:252]   - Generating certificates and keys ...
	I1123 08:45:16.514990  329090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:45:16.515094  329090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:45:16.608081  329090 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:45:16.680528  329090 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:45:16.801156  329090 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:45:17.144723  329090 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:45:17.391838  329090 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:45:17.392042  329090 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.447222  329090 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:45:17.447383  329090 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.644625  329090 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:45:17.916674  329090 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:45:18.538498  329090 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:45:18.538728  329090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:45:18.967277  329090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:45:19.377546  329090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:45:19.559622  329090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:45:20.075738  329090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:45:20.364836  329090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:45:20.365389  329090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:45:20.380029  329090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 08:45:15.964678  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.463898  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.038557  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:20.040142  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:20.381602  329090 out.go:252]   - Booting up control plane ...
	I1123 08:45:20.381763  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:45:20.381900  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:45:20.382610  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:45:20.395865  329090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:45:20.396015  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:45:20.402081  329090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:45:20.402378  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:45:20.402436  329090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:45:20.508331  329090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:45:20.508495  329090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:45:22.009994  329090 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501781773s
	I1123 08:45:22.014389  329090 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:45:22.014519  329090 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:45:22.014637  329090 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:45:22.014773  329090 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:45:23.091748  329090 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.077310791s
	I1123 08:45:23.589008  329090 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.574535055s
	I1123 08:45:25.015461  329090 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001048624s
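The three control-plane-check probes above hit well-known health endpoints, so the same checks can be repeated by hand when a bootstrap is slow (addresses and ports taken from the log; `-k` is needed because the endpoints serve the cluster's self-signed certificates):

	curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez     # kube-scheduler
	curl -k https://192.168.103.2:8443/livez  # kube-apiserver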
	I1123 08:45:25.026445  329090 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:45:25.036344  329090 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:45:25.045136  329090 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:45:25.045341  329090 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-756339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:45:25.052213  329090 kubeadm.go:319] [bootstrap-token] Using token: jh7osp.28agjpkabxiw65fh
	W1123 08:45:20.963406  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.964352  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.538516  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:24.539132  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:25.055029  329090 out.go:252]   - Configuring RBAC rules ...
	I1123 08:45:25.055175  329090 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:45:25.058117  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:45:25.062975  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:45:25.066360  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:45:25.069196  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:45:25.071492  329090 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:45:25.419913  329090 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:45:25.836463  329090 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:45:26.420358  329090 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:45:26.421135  329090 kubeadm.go:319] 
	I1123 08:45:26.421252  329090 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:45:26.421277  329090 kubeadm.go:319] 
	I1123 08:45:26.421378  329090 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:45:26.421390  329090 kubeadm.go:319] 
	I1123 08:45:26.421426  329090 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:45:26.421521  329090 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:45:26.421603  329090 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:45:26.421620  329090 kubeadm.go:319] 
	I1123 08:45:26.421735  329090 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:45:26.421746  329090 kubeadm.go:319] 
	I1123 08:45:26.421806  329090 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:45:26.421815  329090 kubeadm.go:319] 
	I1123 08:45:26.421881  329090 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:45:26.421994  329090 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:45:26.422098  329090 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:45:26.422107  329090 kubeadm.go:319] 
	I1123 08:45:26.422206  329090 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:45:26.422316  329090 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:45:26.422325  329090 kubeadm.go:319] 
	I1123 08:45:26.422429  329090 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422527  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 08:45:26.422562  329090 kubeadm.go:319] 	--control-plane 
	I1123 08:45:26.422571  329090 kubeadm.go:319] 
	I1123 08:45:26.422711  329090 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:45:26.422722  329090 kubeadm.go:319] 
	I1123 08:45:26.422841  329090 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422947  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
	I1123 08:45:26.425509  329090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:45:26.425638  329090 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
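The `--discovery-token-ca-cert-hash` in the join commands above is the SHA-256 digest of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo), so it can be recomputed on the control plane at any time. Using this cluster's certificate directory from the log (`/var/lib/minikube/certs`), something like:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex

should reproduce the hex digest in the `sha256:00e9f1f4…` value above, and once the bootstrap token expires a fresh join command can be printed with `kubeadm token create --print-join-command`.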
	I1123 08:45:26.425665  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:26.425679  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:26.427041  329090 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:45:26.427891  329090 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:45:26.432307  329090 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:45:26.432326  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:45:26.445364  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
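With the `docker` driver and `crio` runtime detected, minikube recommends kindnet as the CNI (the `cni.go:143` line above) and applies its manifest with the node-local kubectl against `/var/lib/minikube/kubeconfig`. If pods later hang in `ContainerCreating`, the same binary can confirm the rollout, e.g. (sketch, assuming kindnet's usual DaemonSet name in `kube-system`):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get ds kindnet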
	I1123 08:45:26.642490  329090 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:45:26.642551  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:26.642592  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-756339 minikube.k8s.io/updated_at=2025_11_23T08_45_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-756339 minikube.k8s.io/primary=true
	I1123 08:45:26.729263  329090 ops.go:34] apiserver oom_adj: -16
	I1123 08:45:26.729393  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 08:45:25.464467  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:27.964097  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:26.539240  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:29.038507  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:27.229843  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:27.730298  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.230009  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.730490  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.229984  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.730299  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.229522  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.729582  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.230290  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.293892  329090 kubeadm.go:1114] duration metric: took 4.651396638s to wait for elevateKubeSystemPrivileges
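The burst of identical `kubectl get sa default` runs above is a ~500ms poll: minikube waits for the controller-manager's service-account controller to create the `default` ServiceAccount before declaring the cluster started. Roughly:

	# poll until the default ServiceAccount exists (sketch of the loop above)
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done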
	I1123 08:45:31.293931  329090 kubeadm.go:403] duration metric: took 15.029851328s to StartCluster
	I1123 08:45:31.293953  329090 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.294038  329090 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:31.295585  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.295872  329090 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:31.295936  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:45:31.296007  329090 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:45:31.296114  329090 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-756339"
	I1123 08:45:31.296118  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:31.296134  329090 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-756339"
	I1123 08:45:31.296128  329090 addons.go:70] Setting default-storageclass=true in profile "embed-certs-756339"
	I1123 08:45:31.296166  329090 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-756339"
	I1123 08:45:31.296176  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.296604  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.296720  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.297232  329090 out.go:179] * Verifying Kubernetes components...
	I1123 08:45:31.299135  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:31.322679  329090 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:45:31.324511  329090 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.324536  329090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:45:31.324593  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.329451  329090 addons.go:239] Setting addon default-storageclass=true in "embed-certs-756339"
	I1123 08:45:31.329500  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.330018  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.359473  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.359508  329090 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.359523  329090 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:45:31.359576  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.383150  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.400104  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:45:31.438850  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:31.477184  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.500079  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.590832  329090 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
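The long sed pipeline above rewrites the CoreDNS ConfigMap in place: it enables the `log` plugin and splices a `hosts` block in front of the `forward . /etc/resolv.conf` line so that `host.minikube.internal` resolves to the host gateway. The fragment it injects into the Corefile is:

	hosts {
	   192.168.103.1 host.minikube.internal
	   fallthrough
	}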
	I1123 08:45:31.592356  329090 node_ready.go:35] waiting up to 6m0s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:31.806094  329090 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 08:45:30.466331  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:32.963158  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:34.963993  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:31.541665  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:34.038345  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:31.807238  329090 addons.go:530] duration metric: took 511.238501ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:45:32.094332  329090 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-756339" context rescaled to 1 replicas
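On a single-node cluster minikube also scales the CoreDNS Deployment down from kubeadm's default of two replicas to one (the `kapi.go:214` line above); done by hand this is roughly:

	kubectl -n kube-system scale deployment coredns --replicas=1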
	W1123 08:45:33.595476  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:36.094914  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:37.463401  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:39.463744  323135 pod_ready.go:94] pod "coredns-66bc5c9577-8f8f5" is "Ready"
	I1123 08:45:39.463771  323135 pod_ready.go:86] duration metric: took 37.505301624s for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.466073  323135 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.469881  323135 pod_ready.go:94] pod "etcd-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.469907  323135 pod_ready.go:86] duration metric: took 3.813451ms for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.471783  323135 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.475591  323135 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.475615  323135 pod_ready.go:86] duration metric: took 3.808626ms for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.477543  323135 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.662072  323135 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.662095  323135 pod_ready.go:86] duration metric: took 184.532328ms for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.861972  323135 pod_ready.go:83] waiting for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.262090  323135 pod_ready.go:94] pod "kube-proxy-sn4sp" is "Ready"
	I1123 08:45:40.262116  323135 pod_ready.go:86] duration metric: took 400.120277ms for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.462054  323135 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862186  323135 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:40.862212  323135 pod_ready.go:86] duration metric: took 400.136767ms for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862222  323135 pod_ready.go:40] duration metric: took 38.907156113s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:40.906296  323135 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:40.908135  323135 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-726261" cluster and "default" namespace by default
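The "extra waiting" block above checks, label by label, that one pod from each control-plane component is Ready before the profile is declared done. The closest hand-rolled equivalent is a loop over the same selectors (sketch; unlike minikube's check, `kubectl wait` fails rather than succeeds when no pod matches a selector):

	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
	done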
	W1123 08:45:36.537535  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:38.537920  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:40.537903  323816 pod_ready.go:94] pod "coredns-66bc5c9577-khlrk" is "Ready"
	I1123 08:45:40.537927  323816 pod_ready.go:86] duration metric: took 38.004948026s for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.540197  323816 pod_ready.go:83] waiting for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.543594  323816 pod_ready.go:94] pod "etcd-no-preload-187607" is "Ready"
	I1123 08:45:40.543613  323816 pod_ready.go:86] duration metric: took 3.39504ms for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.545430  323816 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.548523  323816 pod_ready.go:94] pod "kube-apiserver-no-preload-187607" is "Ready"
	I1123 08:45:40.548540  323816 pod_ready.go:86] duration metric: took 3.086438ms for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.550144  323816 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.736784  323816 pod_ready.go:94] pod "kube-controller-manager-no-preload-187607" is "Ready"
	I1123 08:45:40.736810  323816 pod_ready.go:86] duration metric: took 186.650289ms for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.936965  323816 pod_ready.go:83] waiting for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:38.095893  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:40.595721  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	I1123 08:45:41.336483  323816 pod_ready.go:94] pod "kube-proxy-f9d8j" is "Ready"
	I1123 08:45:41.336508  323816 pod_ready.go:86] duration metric: took 399.518187ms for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.536451  323816 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936068  323816 pod_ready.go:94] pod "kube-scheduler-no-preload-187607" is "Ready"
	I1123 08:45:41.936095  323816 pod_ready.go:86] duration metric: took 399.617585ms for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936110  323816 pod_ready.go:40] duration metric: took 39.406642608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:41.977753  323816 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:41.979147  323816 out.go:179] * Done! kubectl is now configured to use "no-preload-187607" cluster and "default" namespace by default
	I1123 08:45:43.095643  329090 node_ready.go:49] node "embed-certs-756339" is "Ready"
	I1123 08:45:43.095676  329090 node_ready.go:38] duration metric: took 11.503297149s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:43.095722  329090 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:45:43.095787  329090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:45:43.107848  329090 api_server.go:72] duration metric: took 11.811934824s to wait for apiserver process to appear ...
	I1123 08:45:43.107869  329090 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:43.107884  329090 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:45:43.112629  329090 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:45:43.113413  329090 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:43.113433  329090 api_server.go:131] duration metric: took 5.559653ms to wait for apiserver health ...
	I1123 08:45:43.113441  329090 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:43.116485  329090 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:43.116510  329090 system_pods.go:61] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.116515  329090 system_pods.go:61] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.116520  329090 system_pods.go:61] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.116525  329090 system_pods.go:61] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.116532  329090 system_pods.go:61] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.116536  329090 system_pods.go:61] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.116539  329090 system_pods.go:61] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.116545  329090 system_pods.go:61] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.116550  329090 system_pods.go:74] duration metric: took 3.105251ms to wait for pod list to return data ...
	I1123 08:45:43.116558  329090 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:43.118523  329090 default_sa.go:45] found service account: "default"
	I1123 08:45:43.118538  329090 default_sa.go:55] duration metric: took 1.974886ms for default service account to be created ...
	I1123 08:45:43.118545  329090 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:43.120780  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.120802  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.120810  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.120815  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.120819  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.120826  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.120831  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.120834  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.120839  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.120863  329090 retry.go:31] will retry after 215.602357ms: missing components: kube-dns
	I1123 08:45:43.340425  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.340455  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.340462  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.340467  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.340472  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.340477  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.340480  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.340483  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.340488  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.340504  329090 retry.go:31] will retry after 325.287893ms: missing components: kube-dns
	I1123 08:45:43.668913  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.668952  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.668962  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.668971  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.668977  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.668983  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.668987  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.668993  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.669002  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.669025  329090 retry.go:31] will retry after 462.937798ms: missing components: kube-dns
	I1123 08:45:44.135919  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:44.135950  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Running
	I1123 08:45:44.135957  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:44.135962  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:44.135967  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:44.135972  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:44.135977  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:44.135983  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:44.135988  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Running
	I1123 08:45:44.135997  329090 system_pods.go:126] duration metric: took 1.017446384s to wait for k8s-apps to be running ...
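Note the growing retry delays above (~216ms, 325ms, 463ms): the `retry.go` helper backs off with jitter while the required `kube-dns` pods move from Pending to Running. A crude shell analogue of that poll:

	# back off while any required kube-system pod is still Pending (sketch)
	delay=0.2
	while kubectl -n kube-system get pods --no-headers | grep -q Pending; do
	  sleep "$delay"
	  delay=$(awk "BEGIN{print $delay*1.5}")
	done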
	I1123 08:45:44.136008  329090 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:44.136053  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:44.148387  329090 system_svc.go:56] duration metric: took 12.375192ms WaitForService to wait for kubelet
	I1123 08:45:44.148408  329090 kubeadm.go:587] duration metric: took 12.85249816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:44.148426  329090 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:44.150884  329090 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:44.150906  329090 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:44.150923  329090 node_conditions.go:105] duration metric: took 2.493335ms to run NodePressure ...
	I1123 08:45:44.150933  329090 start.go:242] waiting for startup goroutines ...
	I1123 08:45:44.150943  329090 start.go:247] waiting for cluster config update ...
	I1123 08:45:44.150953  329090 start.go:256] writing updated cluster config ...
	I1123 08:45:44.151188  329090 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:44.154964  329090 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:44.158442  329090 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.162122  329090 pod_ready.go:94] pod "coredns-66bc5c9577-ffmn2" is "Ready"
	I1123 08:45:44.162139  329090 pod_ready.go:86] duration metric: took 3.680173ms for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.163781  329090 pod_ready.go:83] waiting for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.167030  329090 pod_ready.go:94] pod "etcd-embed-certs-756339" is "Ready"
	I1123 08:45:44.167046  329090 pod_ready.go:86] duration metric: took 3.249458ms for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.168620  329090 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.171889  329090 pod_ready.go:94] pod "kube-apiserver-embed-certs-756339" is "Ready"
	I1123 08:45:44.171905  329090 pod_ready.go:86] duration metric: took 3.265991ms for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.173681  329090 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.558804  329090 pod_ready.go:94] pod "kube-controller-manager-embed-certs-756339" is "Ready"
	I1123 08:45:44.558838  329090 pod_ready.go:86] duration metric: took 385.124392ms for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.759793  329090 pod_ready.go:83] waiting for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.158864  329090 pod_ready.go:94] pod "kube-proxy-npnsh" is "Ready"
	I1123 08:45:45.158887  329090 pod_ready.go:86] duration metric: took 399.071703ms for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.360200  329090 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758770  329090 pod_ready.go:94] pod "kube-scheduler-embed-certs-756339" is "Ready"
	I1123 08:45:45.758800  329090 pod_ready.go:86] duration metric: took 398.571969ms for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758811  329090 pod_ready.go:40] duration metric: took 1.603821403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:45.800049  329090 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:45.802064  329090 out.go:179] * Done! kubectl is now configured to use "embed-certs-756339" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 08:45:14 no-preload-187607 crio[568]: time="2025-11-23T08:45:14.50697807Z" level=info msg="Started container" PID=1700 containerID=8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper id=466b3896-a7bb-4df4-a299-2d9ff390e087 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f252061dd11e58b7aa8da24165aadd581d83651ba9757de92900f6f4f523e628
	Nov 23 08:45:15 no-preload-187607 crio[568]: time="2025-11-23T08:45:15.355999571Z" level=info msg="Removing container: a978dc4166a0721e1ca3efca1a9e1e1fbeacd5219eeb0819615d697554d3b861" id=486c8124-f382-4a0d-9139-94b7b3be9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:15 no-preload-187607 crio[568]: time="2025-11-23T08:45:15.368021143Z" level=info msg="Removed container a978dc4166a0721e1ca3efca1a9e1e1fbeacd5219eeb0819615d697554d3b861: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper" id=486c8124-f382-4a0d-9139-94b7b3be9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.266434756Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=51cbe557-24fb-42da-b92b-d6e701ff3283 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.267396946Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=87ee8c00-0fdd-4ede-9ee1-bfdb4a936481 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.268463762Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper" id=87c84b8b-1c0b-4c53-a6fa-d3bf0a98fe8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.268584894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.274155112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.274609227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.314154409Z" level=info msg="Created container 9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper" id=87c84b8b-1c0b-4c53-a6fa-d3bf0a98fe8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.314908714Z" level=info msg="Starting container: 9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f" id=b1cae5ed-3a16-4dcc-b84f-b280bd38f198 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.316517916Z" level=info msg="Started container" PID=1712 containerID=9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper id=b1cae5ed-3a16-4dcc-b84f-b280bd38f198 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f252061dd11e58b7aa8da24165aadd581d83651ba9757de92900f6f4f523e628
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.395618687Z" level=info msg="Removing container: 8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1" id=aa24d49c-8966-49e8-965d-93f38b45d5ae name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.404390104Z" level=info msg="Removed container 8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper" id=aa24d49c-8966-49e8-965d-93f38b45d5ae name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.406235352Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=af273b10-4dbe-424f-b344-7753d2d80b7d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.407076382Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c19f2f41-c061-4d0d-9412-6ed71c0f5748 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.408118241Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=00d499b5-edf6-4c7d-9367-374f68a895e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.408242394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.413383581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.41357001Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ff35123aea3a53ef0d6d170a28b288aedbfc75ac397c4854faa2cb30a8f8fd89/merged/etc/passwd: no such file or directory"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.413595711Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ff35123aea3a53ef0d6d170a28b288aedbfc75ac397c4854faa2cb30a8f8fd89/merged/etc/group: no such file or directory"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.413889719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.444669211Z" level=info msg="Created container c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214: kube-system/storage-provisioner/storage-provisioner" id=00d499b5-edf6-4c7d-9367-374f68a895e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.445194646Z" level=info msg="Starting container: c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214" id=49bd0dc8-16ae-43cb-b1f8-02b27a04a4b9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.447037721Z" level=info msg="Started container" PID=1726 containerID=c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214 description=kube-system/storage-provisioner/storage-provisioner id=49bd0dc8-16ae-43cb-b1f8-02b27a04a4b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=23e97d90cf1f092a96d4535340b59be9917de2f5ef91878c69266fdc45d0b634
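The container status table below is CRI-O's view of the node as reported over the CRI API; the same listing (including exited containers) can be reproduced on the node with:

	sudo crictl ps -a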
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c6c270dccd32c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   23e97d90cf1f0       storage-provisioner                          kube-system
	9a28a511032a2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   f252061dd11e5       dashboard-metrics-scraper-6ffb444bf9-hcb2b   kubernetes-dashboard
	f6600a361a3ba       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   f02426d39a6cb       kubernetes-dashboard-855c9754f9-c25qj        kubernetes-dashboard
	228561f3c58a8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   24905f1565670       busybox                                      default
	0f65ae30d25f0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   63384f1b6547d       coredns-66bc5c9577-khlrk                     kube-system
	c7d79d91171ad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   340cae71ce404       kindnet-67c62                                kube-system
	3c52daba0a02a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   23e97d90cf1f0       storage-provisioner                          kube-system
	4aa18c92f3f57       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   4ab9e21b47c7c       kube-proxy-f9d8j                             kube-system
	82c67fc0d0d50       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   b4360f999ab57       etcd-no-preload-187607                       kube-system
	58bccd8b52572       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   9a4549b6d8756       kube-apiserver-no-preload-187607             kube-system
	f7dc3b2c3eb35       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   de9ba1fc3fa90       kube-controller-manager-no-preload-187607    kube-system
	f9c1a46853ec5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   514d893b96883       kube-scheduler-no-preload-187607             kube-system
	
	
	==> coredns [0f65ae30d25f0e6796dc383f8f723ee3a043d903cea5be75fb9ad29429a39fa0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53876 - 10530 "HINFO IN 8572700105362580089.124782930035999275. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.088248895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
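The dial timeouts above are CoreDNS failing to reach `10.96.0.1:443`, the `kubernetes` Service VIP, which is typical for a short window after a restart before kube-proxy has programmed the service rules; the list/watch calls recover once the rules are in place. From the node the VIP can be probed with plain bash (sketch):

	# exits 0 once the Service VIP is reachable again
	timeout 2 bash -c 'cat < /dev/null > /dev/tcp/10.96.0.1/443' && echo open || echo closed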
	
	
	==> describe nodes <==
	Name:               no-preload-187607
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-187607
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-187607
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-187607
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:45:31 +0000   Sun, 23 Nov 2025 08:43:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:45:31 +0000   Sun, 23 Nov 2025 08:43:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:45:31 +0000   Sun, 23 Nov 2025 08:43:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:45:31 +0000   Sun, 23 Nov 2025 08:44:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-187607
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                156073dd-043d-48c6-8d6c-0e5326137d17
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-khlrk                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-187607                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-67c62                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-187607              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-187607     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-f9d8j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-187607              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-hcb2b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c25qj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node no-preload-187607 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node no-preload-187607 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node no-preload-187607 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node no-preload-187607 event: Registered Node no-preload-187607 in Controller
	  Normal  NodeReady                95s                kubelet          Node no-preload-187607 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node no-preload-187607 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node no-preload-187607 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node no-preload-187607 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node no-preload-187607 event: Registered Node no-preload-187607 in Controller
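The percentages kubectl prints in the Allocated resources block above are requests (or limits) divided by the node's allocatable amount, truncated to a whole number: 850m of the 8 allocatable CPUs (8000m) is 10.625%, shown as 10%. A quick Go check of that arithmetic; the truncation behaviour is inferred from the output above, not from kubectl's source:

package main

import "fmt"

func main() {
	// cpu: 850m requested of 8000m allocatable -> printed as 10% above.
	fmt.Printf("cpu requests: %d%%\n", 850*100/8000) // integer division truncates 10.625 to 10
	// memory: 220Mi = 225280Ki requested of 32863360Ki allocatable -> 0%.
	fmt.Printf("memory requests: %d%%\n", 225280*100/32863360) // 0
}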
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [82c67fc0d0d50ab08e241d39a2087b1c3e8bc3f645f3bfdeeb79a7ab0f98af22] <==
	{"level":"info","ts":"2025-11-23T08:45:05.539225Z","caller":"traceutil/trace.go:172","msg":"trace[1643612090] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"187.246433ms","start":"2025-11-23T08:45:05.351964Z","end":"2025-11-23T08:45:05.539210Z","steps":["trace[1643612090] 'process raft request'  (duration: 187.155382ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:05.539255Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.313852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" limit:1 ","response":"range_response_count:1 size:4430"}
	{"level":"info","ts":"2025-11-23T08:45:05.539215Z","caller":"traceutil/trace.go:172","msg":"trace[2122619234] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"188.248399ms","start":"2025-11-23T08:45:05.350949Z","end":"2025-11-23T08:45:05.539197Z","steps":["trace[2122619234] 'process raft request'  (duration: 188.13882ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.539295Z","caller":"traceutil/trace.go:172","msg":"trace[99005969] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:1; response_revision:493; }","duration":"109.360342ms","start":"2025-11-23T08:45:05.429922Z","end":"2025-11-23T08:45:05.539282Z","steps":["trace[99005969] 'agreement among raft nodes before linearized reading'  (duration: 109.238345ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.539335Z","caller":"traceutil/trace.go:172","msg":"trace[523077530] transaction","detail":"{read_only:false; response_revision:491; number_of_response:1; }","duration":"188.40504ms","start":"2025-11-23T08:45:05.350911Z","end":"2025-11-23T08:45:05.539316Z","steps":["trace[523077530] 'process raft request'  (duration: 185.201162ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728096Z","caller":"traceutil/trace.go:172","msg":"trace[205968407] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"183.628429ms","start":"2025-11-23T08:45:05.544448Z","end":"2025-11-23T08:45:05.728077Z","steps":["trace[205968407] 'process raft request'  (duration: 180.295065ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728240Z","caller":"traceutil/trace.go:172","msg":"trace[814500553] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"182.049169ms","start":"2025-11-23T08:45:05.546175Z","end":"2025-11-23T08:45:05.728224Z","steps":["trace[814500553] 'process raft request'  (duration: 182.01285ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728270Z","caller":"traceutil/trace.go:172","msg":"trace[1579071933] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"182.839414ms","start":"2025-11-23T08:45:05.545418Z","end":"2025-11-23T08:45:05.728257Z","steps":["trace[1579071933] 'process raft request'  (duration: 182.643262ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728351Z","caller":"traceutil/trace.go:172","msg":"trace[1017226904] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"183.90296ms","start":"2025-11-23T08:45:05.544434Z","end":"2025-11-23T08:45:05.728337Z","steps":["trace[1017226904] 'process raft request'  (duration: 183.557154ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728355Z","caller":"traceutil/trace.go:172","msg":"trace[1557003027] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"182.822296ms","start":"2025-11-23T08:45:05.545525Z","end":"2025-11-23T08:45:05.728347Z","steps":["trace[1557003027] 'process raft request'  (duration: 182.583427ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728450Z","caller":"traceutil/trace.go:172","msg":"trace[1857130239] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"182.672715ms","start":"2025-11-23T08:45:05.545765Z","end":"2025-11-23T08:45:05.728438Z","steps":["trace[1857130239] 'process raft request'  (duration: 182.381249ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.730803Z","caller":"traceutil/trace.go:172","msg":"trace[672235072] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"108.455123ms","start":"2025-11-23T08:45:05.622338Z","end":"2025-11-23T08:45:05.730793Z","steps":["trace[672235072] 'process raft request'  (duration: 108.388894ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.892455Z","caller":"traceutil/trace.go:172","msg":"trace[2018686248] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"159.424324ms","start":"2025-11-23T08:45:05.733014Z","end":"2025-11-23T08:45:05.892438Z","steps":["trace[2018686248] 'process raft request'  (duration: 129.795449ms)","trace[2018686248] 'compare'  (duration: 29.525603ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:05.906741Z","caller":"traceutil/trace.go:172","msg":"trace[1489864518] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"173.71283ms","start":"2025-11-23T08:45:05.733012Z","end":"2025-11-23T08:45:05.906725Z","steps":["trace[1489864518] 'process raft request'  (duration: 173.551406ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.906769Z","caller":"traceutil/trace.go:172","msg":"trace[1809762921] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"172.246802ms","start":"2025-11-23T08:45:05.734514Z","end":"2025-11-23T08:45:05.906761Z","steps":["trace[1809762921] 'process raft request'  (duration: 172.155005ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.906846Z","caller":"traceutil/trace.go:172","msg":"trace[666099855] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"170.96402ms","start":"2025-11-23T08:45:05.735866Z","end":"2025-11-23T08:45:05.906830Z","steps":["trace[666099855] 'process raft request'  (duration: 170.866633ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.906964Z","caller":"traceutil/trace.go:172","msg":"trace[1645553263] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"172.458656ms","start":"2025-11-23T08:45:05.734495Z","end":"2025-11-23T08:45:05.906953Z","steps":["trace[1645553263] 'process raft request'  (duration: 172.130118ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:06.131295Z","caller":"traceutil/trace.go:172","msg":"trace[1244225681] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"218.366799ms","start":"2025-11-23T08:45:05.912907Z","end":"2025-11-23T08:45:06.131273Z","steps":["trace[1244225681] 'process raft request'  (duration: 183.582038ms)","trace[1244225681] 'compare'  (duration: 34.623056ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:06.131391Z","caller":"traceutil/trace.go:172","msg":"trace[1945324553] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"217.240667ms","start":"2025-11-23T08:45:05.914138Z","end":"2025-11-23T08:45:06.131378Z","steps":["trace[1945324553] 'process raft request'  (duration: 217.075776ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:06.400508Z","caller":"traceutil/trace.go:172","msg":"trace[1194367375] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"213.784788ms","start":"2025-11-23T08:45:06.186709Z","end":"2025-11-23T08:45:06.400494Z","steps":["trace[1194367375] 'process raft request'  (duration: 207.446676ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:06.703653Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.232216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-khlrk\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-11-23T08:45:06.703735Z","caller":"traceutil/trace.go:172","msg":"trace[906958138] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-khlrk; range_end:; response_count:1; response_revision:514; }","duration":"169.322892ms","start":"2025-11-23T08:45:06.534400Z","end":"2025-11-23T08:45:06.703722Z","steps":["trace[906958138] 'agreement among raft nodes before linearized reading'  (duration: 27.00419ms)","trace[906958138] 'range keys from in-memory index tree'  (duration: 142.143279ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:06.704111Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.172965ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361486681663 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/no-preload-187607.187a96558e9abf34\" mod_revision:512 > success:<request_put:<key:\"/registry/events/default/no-preload-187607.187a96558e9abf34\" value_size:624 lease:6571766361486681580 >> failure:<request_range:<key:\"/registry/events/default/no-preload-187607.187a96558e9abf34\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:45:06.704186Z","caller":"traceutil/trace.go:172","msg":"trace[2086217684] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"274.391641ms","start":"2025-11-23T08:45:06.429782Z","end":"2025-11-23T08:45:06.704174Z","steps":["trace[2086217684] 'process raft request'  (duration: 131.67672ms)","trace[2086217684] 'compare'  (duration: 142.099979ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:06.989318Z","caller":"traceutil/trace.go:172","msg":"trace[939907356] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"166.727737ms","start":"2025-11-23T08:45:06.822571Z","end":"2025-11-23T08:45:06.989299Z","steps":["trace[939907356] 'process raft request'  (duration: 127.63493ms)","trace[939907356] 'compare'  (duration: 39.005041ms)"],"step_count":2}
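Every line above is etcd flagging a request that exceeded its 100ms expected-duration, either as an "apply request took too long" warning or as a traceutil trace with per-step timings; together with the ~3.7 load average on 8 CPUs in the kernel section below, this reads as transient slowness under test load rather than a stuck backend. A throwaway Go filter for these structured lines, keyed on the took/duration fields visible above (field names assumed from the log itself):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields we need from etcd's JSON log lines.
type etcdLogLine struct {
	Level    string `json:"level"`
	Msg      string `json:"msg"`
	Took     string `json:"took"`     // set on "apply request took too long" warnings
	Duration string `json:"duration"` // set on traceutil traces
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var l etcdLogLine
		if err := json.Unmarshal(sc.Bytes(), &l); err != nil {
			continue // skip non-JSON lines
		}
		if l.Took != "" || l.Duration != "" {
			fmt.Printf("%-5s took=%-14s duration=%-14s %s\n", l.Level, l.Took, l.Duration, l.Msg)
		}
	}
}

Fed the etcd container log on stdin, it prints one line per slow request.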
	
	
	==> kernel <==
	 08:45:56 up  1:28,  0 user,  load average: 3.69, 3.74, 2.45
	Linux no-preload-187607 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c7d79d91171ad2356ff4429be5853d33c2d0b45d87251302f6d1b783580ef9ee] <==
	I1123 08:45:03.331754       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:45:03.332049       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 08:45:03.332333       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:45:03.332362       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:45:03.332427       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:45:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:45:03.535850       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:45:03.535874       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:45:03.535886       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:45:03.536074       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:45:03.836803       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:03.836831       1 metrics.go:72] Registering metrics
	I1123 08:45:03.836879       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:13.535778       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:13.535834       1 main.go:301] handling current node
	I1123 08:45:23.540770       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:23.540818       1 main.go:301] handling current node
	I1123 08:45:33.536218       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:33.536257       1 main.go:301] handling current node
	I1123 08:45:43.537757       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:43.537799       1 main.go:301] handling current node
	I1123 08:45:53.544769       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:53.544805       1 main.go:301] handling current node
	
	
	==> kube-apiserver [58bccd8b525725bf0e119a031f7704340d4a582f1f9d22e35700e56c5414fc15] <==
	I1123 08:45:01.291031       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:45:01.296909       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:45:01.301461       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:45:01.301563       1 policy_source.go:240] refreshing policies
	I1123 08:45:01.302385       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 08:45:01.303386       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:45:01.305784       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 08:45:01.305877       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:45:01.306740       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:45:01.306760       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:45:01.306769       1 cache.go:39] Caches are synced for autoregister controller
	E1123 08:45:01.324255       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:45:01.337257       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:45:01.337565       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:45:01.357385       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:45:01.714332       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:45:01.748302       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:45:01.774043       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:45:01.785408       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:45:01.836001       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.136.179"}
	I1123 08:45:01.847637       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.66.152"}
	I1123 08:45:02.180656       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:45:04.806042       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:45:05.003356       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:45:05.251887       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f7dc3b2c3eb35a85ed7f46e5a51507d750e9e62d6d4e5f5d8cf809a595a3fbb5] <==
	I1123 08:45:04.559864       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:45:04.559892       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:45:04.559924       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:45:04.561557       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:45:04.574936       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:04.577070       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:45:04.599603       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:04.599623       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:45:04.599633       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:04.599762       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:45:04.600051       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:45:04.600060       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:45:04.600379       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:45:04.600388       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:45:04.600498       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:45:04.600602       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:45:04.602384       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:45:04.606100       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:45:04.606193       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:45:04.607172       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:04.608300       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:45:04.610550       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:45:04.613812       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:45:04.616068       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:45:04.628510       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4aa18c92f3f578f172c0e283a0c69d67753703f1ad1da5f13d492a4f417e49f1] <==
	I1123 08:45:03.142410       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:45:03.226982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:45:03.327668       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:45:03.327714       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 08:45:03.327881       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:45:03.350055       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:45:03.350112       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:45:03.355895       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:45:03.356336       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:45:03.356354       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:03.358302       1 config.go:309] "Starting node config controller"
	I1123 08:45:03.358995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:45:03.359014       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:45:03.358344       1 config.go:200] "Starting service config controller"
	I1123 08:45:03.359027       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:45:03.358317       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:45:03.359068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:45:03.359273       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:45:03.359360       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:45:03.459512       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:45:03.459555       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:45:03.459581       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f9c1a46853ec5ff3a03c27f07d016527c9affe0091ecc22c9627ff73f8705db1] <==
	I1123 08:44:59.680281       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:45:01.329629       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:45:01.329742       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:01.339044       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:45:01.339483       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:45:01.339356       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:01.339547       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:01.339405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:01.339833       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:01.339860       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:45:01.339843       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:45:01.441298       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:01.442351       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:01.442483       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 23 08:45:05 no-preload-187607 kubelet[706]: I1123 08:45:05.983459     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ac49e3d-7eab-45e2-ab84-ef54283f4bfd-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-hcb2b\" (UID: \"4ac49e3d-7eab-45e2-ab84-ef54283f4bfd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b"
	Nov 23 08:45:05 no-preload-187607 kubelet[706]: I1123 08:45:05.983514     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjrxl\" (UniqueName: \"kubernetes.io/projected/4ac49e3d-7eab-45e2-ab84-ef54283f4bfd-kube-api-access-pjrxl\") pod \"dashboard-metrics-scraper-6ffb444bf9-hcb2b\" (UID: \"4ac49e3d-7eab-45e2-ab84-ef54283f4bfd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b"
	Nov 23 08:45:10 no-preload-187607 kubelet[706]: I1123 08:45:10.228926     706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:45:12 no-preload-187607 kubelet[706]: I1123 08:45:12.211159     706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c25qj" podStartSLOduration=2.7542132070000003 podStartE2EDuration="7.211135721s" podCreationTimestamp="2025-11-23 08:45:05 +0000 UTC" firstStartedPulling="2025-11-23 08:45:06.274357182 +0000 UTC m=+8.179108113" lastFinishedPulling="2025-11-23 08:45:10.731279678 +0000 UTC m=+12.636030627" observedRunningTime="2025-11-23 08:45:11.363261352 +0000 UTC m=+13.268012304" watchObservedRunningTime="2025-11-23 08:45:12.211135721 +0000 UTC m=+14.115886672"
	Nov 23 08:45:14 no-preload-187607 kubelet[706]: I1123 08:45:14.349964     706 scope.go:117] "RemoveContainer" containerID="a978dc4166a0721e1ca3efca1a9e1e1fbeacd5219eeb0819615d697554d3b861"
	Nov 23 08:45:15 no-preload-187607 kubelet[706]: I1123 08:45:15.354531     706 scope.go:117] "RemoveContainer" containerID="a978dc4166a0721e1ca3efca1a9e1e1fbeacd5219eeb0819615d697554d3b861"
	Nov 23 08:45:15 no-preload-187607 kubelet[706]: I1123 08:45:15.354752     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:15 no-preload-187607 kubelet[706]: E1123 08:45:15.354948     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:16 no-preload-187607 kubelet[706]: I1123 08:45:16.359638     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:16 no-preload-187607 kubelet[706]: E1123 08:45:16.359837     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:17 no-preload-187607 kubelet[706]: I1123 08:45:17.362737     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:17 no-preload-187607 kubelet[706]: E1123 08:45:17.362977     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:29 no-preload-187607 kubelet[706]: I1123 08:45:29.265851     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:29 no-preload-187607 kubelet[706]: I1123 08:45:29.394289     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:29 no-preload-187607 kubelet[706]: I1123 08:45:29.394514     706 scope.go:117] "RemoveContainer" containerID="9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f"
	Nov 23 08:45:29 no-preload-187607 kubelet[706]: E1123 08:45:29.394730     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:33 no-preload-187607 kubelet[706]: I1123 08:45:33.405868     706 scope.go:117] "RemoveContainer" containerID="3c52daba0a02a5f43db9a936c7bee455eaed07b8846c57f2a36e9d42a2c662b1"
	Nov 23 08:45:36 no-preload-187607 kubelet[706]: I1123 08:45:36.349258     706 scope.go:117] "RemoveContainer" containerID="9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f"
	Nov 23 08:45:36 no-preload-187607 kubelet[706]: E1123 08:45:36.349410     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:49 no-preload-187607 kubelet[706]: I1123 08:45:49.265999     706 scope.go:117] "RemoveContainer" containerID="9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f"
	Nov 23 08:45:49 no-preload-187607 kubelet[706]: E1123 08:45:49.266204     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:54 no-preload-187607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:45:54 no-preload-187607 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:45:54 no-preload-187607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:45:54 no-preload-187607 systemd[1]: kubelet.service: Consumed 1.662s CPU time.
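The kubelet lines above show CrashLoopBackOff doubling the restart delay for dashboard-metrics-scraper from "back-off 10s" to "back-off 20s" between attempts. A sketch of that schedule; the 10s base and 5m cap match kubelet's documented defaults but are treated as assumptions here:

package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second        // first delay, logged as "back-off 10s"
	const maxBackoff = 5 * time.Minute // assumed cap (kubelet's documented default)
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("restart %d: back-off %s\n", attempt, backoff)
		backoff *= 2 // doubles on each failed restart: 10s, 20s, 40s, ...
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}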
	
	
	==> kubernetes-dashboard [f6600a361a3baa6724f669b340ef4e64b2062295514dc30b0ed6e119477cc6b2] <==
	2025/11/23 08:45:10 Starting overwatch
	2025/11/23 08:45:10 Using namespace: kubernetes-dashboard
	2025/11/23 08:45:10 Using in-cluster config to connect to apiserver
	2025/11/23 08:45:10 Using secret token for csrf signing
	2025/11/23 08:45:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:45:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:45:10 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:45:10 Generating JWE encryption key
	2025/11/23 08:45:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:45:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:45:11 Initializing JWE encryption key from synchronized object
	2025/11/23 08:45:11 Creating in-cluster Sidecar client
	2025/11/23 08:45:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:45:11 Serving insecurely on HTTP port: 9090
	2025/11/23 08:45:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
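The dashboard retries its metrics-scraper health check on a fixed 30-second interval, which is why the two failure lines above are exactly 30s apart (08:45:11 and 08:45:41). A stand-in loop with the same shape; the endpoint URL is an assumption for illustration, not taken from the log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func healthy(url string) bool {
	resp, err := http.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	const scraper = "http://dashboard-metrics-scraper:8000/healthz" // hypothetical endpoint
	for !healthy(scraper) {
		fmt.Println("Metric client health check failed. Retrying in 30 seconds.")
		time.Sleep(30 * time.Second)
	}
	fmt.Println("metrics scraper reachable")
}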
	
	
	==> storage-provisioner [3c52daba0a02a5f43db9a936c7bee455eaed07b8846c57f2a36e9d42a2c662b1] <==
	I1123 08:45:03.110936       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:45:33.113117       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214] <==
	I1123 08:45:33.458417       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:33.466795       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:33.466834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:33.468546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:36.923483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:41.183840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:44.782557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:47.836611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:50.858421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:50.863562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:50.863731       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:50.863845       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e82eb46b-b542-473b-9efe-cdbb2e96ba53", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-187607_5b79c719-1697-415d-b552-77186768d008 became leader
	I1123 08:45:50.863885       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-187607_5b79c719-1697-415d-b552-77186768d008!
	W1123 08:45:50.865507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:50.868766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:50.964065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-187607_5b79c719-1697-415d-b552-77186768d008!
	W1123 08:45:52.872591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:52.884973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:54.889144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:54.893418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
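This second storage-provisioner instance spends about 17 seconds polling before it acquires the kube-system/k8s.io-minikube-hostpath lease, and every poll trips the "v1 Endpoints is deprecated" warning because the provisioner still uses an Endpoints-based lock. A minimal client-go sketch of the same election using the coordination.k8s.io Lease lock instead, which avoids those warnings (identity and timings are illustrative):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // lease holder identity, analogous to no-preload-187607_5b79c719-... above

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative timings
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease, starting provisioner")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, stopping")
			},
		},
	})
}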
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-187607 -n no-preload-187607
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-187607 -n no-preload-187607: exit status 2 (332.80381ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
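The --format flag passed to minikube status above is a Go text/template rendered against the status object, which is why a bare "Running" comes back on stdout even though the command exits 2 (the exit code reflects the overall status, not just the templated field). A stand-in showing the mechanism; the Status struct here is a sketch, not minikube's actual type:

package main

import (
	"log"
	"os"
	"text/template"
)

// Sketch of a status object; only fields the test queries are included.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Running"}
	// Equivalent of --format={{.APIServer}}.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		log.Fatal(err)
	}
}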
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-187607 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-187607
helpers_test.go:243: (dbg) docker inspect no-preload-187607:

-- stdout --
	[
	    {
	        "Id": "c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469",
	        "Created": "2025-11-23T08:43:30.899099908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324268,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:44:51.556025805Z",
	            "FinishedAt": "2025-11-23T08:44:50.55082253Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/hostname",
	        "HostsPath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/hosts",
	        "LogPath": "/var/lib/docker/containers/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469/c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469-json.log",
	        "Name": "/no-preload-187607",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-187607:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-187607",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c79339fc6cb18ebebbf555dd5c87208c8109dc964619b0c477edc09752bc3469",
	                "LowerDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4bfb88cdf45732b2f8ac12ad1bc51f8c30050a553114b9b4320468c46469d96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-187607",
	                "Source": "/var/lib/docker/volumes/no-preload-187607/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-187607",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-187607",
	                "name.minikube.sigs.k8s.io": "no-preload-187607",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "775f86894233b2b6953d6ad591546cb31e3c92d7471bc4799925a827744a3864",
	            "SandboxKey": "/var/run/docker/netns/775f86894233",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-187607": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e4a86ee726dad104f8707d936e5a79c6311cee3cba1074fc9a2490264915ec02",
	                    "EndpointID": "83264f55f1d42ad22f0f00e032ea609dea0e50cfe3c86a0890d9338bd54ea909",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ca:37:fa:5a:42:64",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-187607",
	                        "c79339fc6cb1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
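For a post-mortem like this one, the useful parts of the docker inspect array are the container state and the published ports (the 127.0.0.1:331xx bindings minikube tunnels through). A small Go decoder for just those fields, fed the JSON above on stdin; the field names come straight from the output:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Subset of docker inspect's schema that the post-mortem cares about.
type containerInfo struct {
	Name  string
	State struct {
		Status string
		Paused bool
	}
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var containers []containerInfo // docker inspect prints a JSON array
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Printf("%s: status=%s paused=%v\n", c.Name, c.State.Status, c.State.Paused)
		for port, binds := range c.NetworkSettings.Ports {
			for _, b := range binds {
				fmt.Printf("  %s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
}

Run as: docker inspect no-preload-187607 | go run inspect_ports.go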
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-187607 -n no-preload-187607
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-187607 -n no-preload-187607: exit status 2 (335.722212ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-187607 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-187607 logs -n 25: (1.102425892s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p no-preload-187607 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ addons  │ enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-177890                                                                                                                                                                                                               │ disable-driver-mounts-177890 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-057894 image list --format=json                                                                                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-057894 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ default-k8s-diff-port-726261 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p default-k8s-diff-port-726261 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ image   │ no-preload-187607 image list --format=json                                                                                                                                                                                                    │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-187607 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ stop    │ -p embed-certs-756339 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:45:01.745123  329090 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:01.745432  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745440  329090 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:01.745446  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745739  329090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:01.746375  329090 out.go:368] Setting JSON to false
	I1123 08:45:01.748064  329090 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5249,"bootTime":1763882253,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:45:01.748157  329090 start.go:143] virtualization: kvm guest
	I1123 08:45:01.750156  329090 out.go:179] * [embed-certs-756339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:45:01.753393  329090 notify.go:221] Checking for updates...
	I1123 08:45:01.753398  329090 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:45:01.755146  329090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:45:01.756598  329090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:01.757836  329090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:45:01.758954  329090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:45:01.760360  329090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:45:01.765276  329090 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765522  329090 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765681  329090 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:45:01.765827  329090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:45:01.800644  329090 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:45:01.801313  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.871017  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.860213573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.871190  329090 docker.go:319] overlay module found
	I1123 08:45:01.872879  329090 out.go:179] * Using the docker driver based on user configuration
	I1123 08:45:01.874146  329090 start.go:309] selected driver: docker
	I1123 08:45:01.874172  329090 start.go:927] validating driver "docker" against <nil>
	I1123 08:45:01.874185  329090 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:45:01.874731  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.950283  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.938442114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.950526  329090 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:45:01.950805  329090 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.952251  329090 out.go:179] * Using Docker driver with root privileges
	I1123 08:45:01.953421  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:01.953493  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:01.953508  329090 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:45:01.953584  329090 start.go:353] cluster config:
	{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:01.954827  329090 out.go:179] * Starting "embed-certs-756339" primary control-plane node in "embed-certs-756339" cluster
	I1123 08:45:01.955848  329090 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:45:01.957107  329090 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:45:01.958365  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:01.958393  329090 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:45:01.958408  329090 cache.go:65] Caching tarball of preloaded images
	I1123 08:45:01.958465  329090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:45:01.958507  329090 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:45:01.958523  329090 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:45:01.958635  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:01.958661  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json: {Name:mk2bf238bbe57398e8f0e67e0ff345b4c996e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:01.983475  329090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:45:01.983497  329090 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:45:01.983513  329090 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:45:01.983540  329090 start.go:360] acquireMachinesLock for embed-certs-756339: {Name:mk2607c5ea38ca6bd330e0a548b36202f67f84a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:01.983642  329090 start.go:364] duration metric: took 84.653µs to acquireMachinesLock for "embed-certs-756339"
	I1123 08:45:01.983672  329090 start.go:93] Provisioning new machine with config: &{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:01.983792  329090 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:45:01.986901  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.692445857s)
	I1123 08:45:01.987002  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.670756175s)
	I1123 08:45:01.987136  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.507320621s)
	I1123 08:45:01.987186  323816 api_server.go:72] duration metric: took 2.902108336s to wait for apiserver process to appear ...
	I1123 08:45:01.987204  323816 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:01.987282  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:01.988808  323816 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-187607 addons enable metrics-server
	
	I1123 08:45:01.992707  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:45:01.992732  323816 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:45:01.994529  323816 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 08:45:01.422757  323135 addons.go:530] duration metric: took 3.555416147s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:01.910007  323135 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:45:01.915784  323135 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:45:01.917062  323135 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:01.917089  323135 api_server.go:131] duration metric: took 507.92158ms to wait for apiserver health ...
	I1123 08:45:01.917100  323135 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:01.921785  323135 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:01.921998  323135 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.922039  323135 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.922068  323135 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.922079  323135 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.922087  323135 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.922095  323135 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.922107  323135 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.922115  323135 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.922124  323135 system_pods.go:74] duration metric: took 5.016936ms to wait for pod list to return data ...
	I1123 08:45:01.922189  323135 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:01.925409  323135 default_sa.go:45] found service account: "default"
	I1123 08:45:01.925452  323135 default_sa.go:55] duration metric: took 3.245595ms for default service account to be created ...
	I1123 08:45:01.925463  323135 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:01.931804  323135 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.931872  323135 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.931898  323135 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.931961  323135 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.931995  323135 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.932018  323135 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.932037  323135 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.932066  323135 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.932076  323135 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.932086  323135 system_pods.go:126] duration metric: took 6.61665ms to wait for k8s-apps to be running ...
	I1123 08:45:01.932097  323135 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:01.932143  323135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:01.947263  323135 system_svc.go:56] duration metric: took 15.160659ms WaitForService to wait for kubelet
	I1123 08:45:01.947298  323135 kubeadm.go:587] duration metric: took 4.08017724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.947325  323135 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:01.950481  323135 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:01.950509  323135 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:01.950526  323135 node_conditions.go:105] duration metric: took 3.194245ms to run NodePressure ...
	I1123 08:45:01.950541  323135 start.go:242] waiting for startup goroutines ...
	I1123 08:45:01.950555  323135 start.go:247] waiting for cluster config update ...
	I1123 08:45:01.950571  323135 start.go:256] writing updated cluster config ...
	I1123 08:45:01.950876  323135 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:01.955038  323135 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:01.958449  323135 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:03.965246  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:01.995584  323816 addons.go:530] duration metric: took 2.910424664s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:02.487321  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:02.491678  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:45:02.492738  323816 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:02.492762  323816 api_server.go:131] duration metric: took 505.498506ms to wait for apiserver health ...
	I1123 08:45:02.492770  323816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:02.496254  323816 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:02.496282  323816 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.496290  323816 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.496296  323816 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.496302  323816 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.496310  323816 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.496317  323816 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.496324  323816 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.496334  323816 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.496340  323816 system_pods.go:74] duration metric: took 3.565076ms to wait for pod list to return data ...
	I1123 08:45:02.496348  323816 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:02.498409  323816 default_sa.go:45] found service account: "default"
	I1123 08:45:02.498426  323816 default_sa.go:55] duration metric: took 2.073405ms for default service account to be created ...
	I1123 08:45:02.498434  323816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:02.500853  323816 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.500888  323816 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.500899  323816 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.500912  323816 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.500929  323816 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.500941  323816 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.500951  323816 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.500961  323816 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.500971  323816 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.500978  323816 system_pods.go:126] duration metric: took 2.538671ms to wait for k8s-apps to be running ...
	I1123 08:45:02.500991  323816 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:02.501036  323816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:02.522199  323816 system_svc.go:56] duration metric: took 21.201972ms WaitForService to wait for kubelet
	I1123 08:45:02.522225  323816 kubeadm.go:587] duration metric: took 3.437147085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:02.522246  323816 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:02.524870  323816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:02.524905  323816 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:02.524925  323816 node_conditions.go:105] duration metric: took 2.673388ms to run NodePressure ...
	I1123 08:45:02.524943  323816 start.go:242] waiting for startup goroutines ...
	I1123 08:45:02.524953  323816 start.go:247] waiting for cluster config update ...
	I1123 08:45:02.524970  323816 start.go:256] writing updated cluster config ...
	I1123 08:45:02.525241  323816 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:02.529440  323816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:02.532956  323816 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:04.545550  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:01.985817  329090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:45:01.986054  329090 start.go:159] libmachine.API.Create for "embed-certs-756339" (driver="docker")
	I1123 08:45:01.986094  329090 client.go:173] LocalClient.Create starting
	I1123 08:45:01.986158  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:45:01.986202  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986228  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986299  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:45:01.986331  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986349  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986747  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:45:02.006351  329090 cli_runner.go:211] docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:45:02.006428  329090 network_create.go:284] running [docker network inspect embed-certs-756339] to gather additional debugging logs...
	I1123 08:45:02.006453  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339
	W1123 08:45:02.024029  329090 cli_runner.go:211] docker network inspect embed-certs-756339 returned with exit code 1
	I1123 08:45:02.024056  329090 network_create.go:287] error running [docker network inspect embed-certs-756339]: docker network inspect embed-certs-756339: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-756339 not found
	I1123 08:45:02.024076  329090 network_create.go:289] output of [docker network inspect embed-certs-756339]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-756339 not found
	
	** /stderr **
	I1123 08:45:02.024188  329090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:02.041589  329090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:45:02.042147  329090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:45:02.042884  329090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:45:02.043340  329090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c80b7bca17a7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:f1:41:59:09:b5} reservation:<nil>}
	I1123 08:45:02.043937  329090 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8e58961f3024 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b6:f0:e4:3c:63:d5} reservation:<nil>}
	I1123 08:45:02.044437  329090 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e4a86ee726da IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ae:37:bc:fe:9d:3a} reservation:<nil>}
	I1123 08:45:02.045221  329090 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06cd0}
	I1123 08:45:02.045242  329090 network_create.go:124] attempt to create docker network embed-certs-756339 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1123 08:45:02.045287  329090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-756339 embed-certs-756339
	I1123 08:45:02.095267  329090 network_create.go:108] docker network embed-certs-756339 192.168.103.0/24 created
	I1123 08:45:02.095296  329090 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-756339" container
	I1123 08:45:02.095350  329090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:45:02.111533  329090 cli_runner.go:164] Run: docker volume create embed-certs-756339 --label name.minikube.sigs.k8s.io=embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:45:02.128824  329090 oci.go:103] Successfully created a docker volume embed-certs-756339
	I1123 08:45:02.128896  329090 cli_runner.go:164] Run: docker run --rm --name embed-certs-756339-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --entrypoint /usr/bin/test -v embed-certs-756339:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:45:02.559029  329090 oci.go:107] Successfully prepared a docker volume embed-certs-756339
	I1123 08:45:02.559098  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:02.559108  329090 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:45:02.559163  329090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:45:06.464312  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:08.466215  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:06.707246  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:09.040137  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:11.046122  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:07.131448  329090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.572224972s)
	I1123 08:45:07.131484  329090 kic.go:203] duration metric: took 4.572370498s to extract preloaded images to volume ...
	W1123 08:45:07.131573  329090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:45:07.131616  329090 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:45:07.131860  329090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:45:07.219659  329090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-756339 --name embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-756339 --network embed-certs-756339 --ip 192.168.103.2 --volume embed-certs-756339:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:45:07.635482  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Running}}
	I1123 08:45:07.658965  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.681327  329090 cli_runner.go:164] Run: docker exec embed-certs-756339 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:45:07.737769  329090 oci.go:144] the created container "embed-certs-756339" has a running status.
	I1123 08:45:07.737802  329090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa...
	I1123 08:45:07.895228  329090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:45:07.935222  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.958382  329090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:45:07.958405  329090 kic_runner.go:114] Args: [docker exec --privileged embed-certs-756339 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:45:08.015520  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
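The repeated `docker container inspect --format={{.State.Status}}` calls above are a readiness poll on the freshly created container. A sketch of that loop, assuming only the docker CLI (function name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect` until the container reports
// the "running" state or the timeout expires.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "running" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q not running after %s", name, timeout)
}

func main() {
	fmt.Println(waitRunning("embed-certs-756339", 30*time.Second))
}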
	I1123 08:45:08.039803  329090 machine.go:94] provisionDockerMachine start ...
	I1123 08:45:08.039898  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:08.064345  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:08.064680  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:08.064723  329090 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:45:08.065347  329090 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47890->127.0.0.1:33131: read: connection reset by peer
	I1123 08:45:11.244730  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.244755  329090 ubuntu.go:182] provisioning hostname "embed-certs-756339"
	I1123 08:45:11.244812  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.273763  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.274055  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.274072  329090 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-756339 && echo "embed-certs-756339" | sudo tee /etc/hostname
	I1123 08:45:11.457570  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.457714  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.488146  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.488457  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.488485  329090 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-756339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-756339/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-756339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:45:11.660198  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
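Every SSH step above first resolves the host port Docker mapped to the container's 22/tcp (33131 in this run) using the same Go-template format string. A small sketch of that lookup:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort resolves the host port mapped to the container's 22/tcp,
// reusing the inspect format string that recurs in the log.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	port, err := sshHostPort("embed-certs-756339")
	fmt.Println(port, err) // 33131 in this run
}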
	I1123 08:45:11.660362  329090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:45:11.660453  329090 ubuntu.go:190] setting up certificates
	I1123 08:45:11.660471  329090 provision.go:84] configureAuth start
	I1123 08:45:11.661011  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:11.684982  329090 provision.go:143] copyHostCerts
	I1123 08:45:11.685043  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:45:11.685053  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:45:11.685140  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:45:11.685249  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:45:11.685255  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:45:11.685292  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:45:11.685383  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:45:11.685391  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:45:11.685427  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:45:11.685506  329090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.embed-certs-756339 san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]
	I1123 08:45:11.758697  329090 provision.go:177] copyRemoteCerts
	I1123 08:45:11.758777  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:45:11.758833  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.787179  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:11.905965  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:45:11.934744  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:45:11.961707  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:45:11.985963  329090 provision.go:87] duration metric: took 325.479379ms to configureAuth
	I1123 08:45:11.985992  329090 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:45:11.986220  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:11.986358  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.011499  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:12.011833  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:12.011872  329090 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:45:12.373361  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
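The drop-in just written tells CRI-O to treat the service CIDR as an insecure registry, and it takes effect via the systemctl restart in the same command. A sketch that reassembles the remote command string from the log (the helper is illustrative; minikube builds this internally before running it over SSH):

package main

import "fmt"

// crioDropIn renders the remote command used above to install the
// /etc/sysconfig/crio.minikube drop-in and restart CRI-O.
func crioDropIn(serviceCIDR string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
%s
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
}

func main() { fmt.Println(crioDropIn("10.96.0.0/12")) }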
	I1123 08:45:12.373388  329090 machine.go:97] duration metric: took 4.333562614s to provisionDockerMachine
	I1123 08:45:12.373402  329090 client.go:176] duration metric: took 10.387301049s to LocalClient.Create
	I1123 08:45:12.373431  329090 start.go:167] duration metric: took 10.387376613s to libmachine.API.Create "embed-certs-756339"
	I1123 08:45:12.373444  329090 start.go:293] postStartSetup for "embed-certs-756339" (driver="docker")
	I1123 08:45:12.373458  329090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:45:12.373521  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:45:12.373575  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.394472  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.505303  329090 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:45:12.509881  329090 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:45:12.509946  329090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:45:12.509962  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:45:12.510025  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:45:12.510127  329090 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:45:12.510256  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:45:12.520339  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:12.547586  329090 start.go:296] duration metric: took 174.127267ms for postStartSetup
	I1123 08:45:12.548040  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.572325  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:12.572597  329090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:45:12.572652  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.595241  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.708576  329090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:45:12.713786  329090 start.go:128] duration metric: took 10.729979645s to createHost
	I1123 08:45:12.713812  329090 start.go:83] releasing machines lock for "embed-certs-756339", held for 10.730153164s
	I1123 08:45:12.713888  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.744434  329090 ssh_runner.go:195] Run: cat /version.json
	I1123 08:45:12.744496  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.744678  329090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:45:12.744776  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.771659  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.771722  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.970377  329090 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:12.980003  329090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:45:13.031076  329090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:45:13.037986  329090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:45:13.038091  329090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:45:13.078655  329090 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:45:13.078678  329090 start.go:496] detecting cgroup driver to use...
	I1123 08:45:13.078778  329090 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:45:13.078826  329090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:45:13.102501  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:45:13.121011  329090 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:45:13.121088  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:45:13.144025  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:45:13.166610  329090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:45:13.266885  329090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:45:13.383738  329090 docker.go:234] disabling docker service ...
	I1123 08:45:13.383808  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:45:13.408902  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:45:13.425055  329090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:45:13.533375  329090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:45:13.641970  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:45:13.655349  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:45:13.672802  329090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:45:13.672859  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.682619  329090 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:45:13.682671  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.691340  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.700633  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.709880  329090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:45:13.717844  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.726872  329090 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.741035  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.750011  329090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:45:13.757738  329090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:45:13.764834  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:13.846176  329090 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:45:15.041719  329090 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.195506975s)
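The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls, then daemon-reload and restart. A condensed sketch of that edit list, using the values from this run (the helper name is illustrative):

package main

import "fmt"

// crioConfigEdits lists the in-place edits applied above to
// /etc/crio/crio.conf.d/02-crio.conf.
func crioConfigEdits(pauseImage, cgroupMgr string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigEdits("registry.k8s.io/pause:3.10.1", "systemd") {
		fmt.Println(c)
	}
}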
	I1123 08:45:15.041743  329090 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:45:15.041806  329090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:45:15.046071  329090 start.go:564] Will wait 60s for crictl version
	I1123 08:45:15.046136  329090 ssh_runner.go:195] Run: which crictl
	I1123 08:45:15.049573  329090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:45:15.078843  329090 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
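"Will wait 60s for crictl version" above is a retry loop: the runtime socket can lag the service restart, so the probe is retried until it succeeds or the budget expires. A sketch of that pattern (helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitCrictl retries `crictl version` until the runtime answers or the
// budget runs out, mirroring the 60s wait in the log.
func waitCrictl(crictl string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command("sudo", crictl, "version").Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("crictl not ready after %s", timeout)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	fmt.Println(waitCrictl("/usr/local/bin/crictl", 60*time.Second))
}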
	I1123 08:45:15.078920  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.108962  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.139712  329090 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 08:45:10.968346  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.466785  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.540283  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:16.038123  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:15.141197  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:15.159501  329090 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 08:45:15.163431  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:15.173476  329090 kubeadm.go:884] updating cluster {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:45:15.173575  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:15.173616  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.210172  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.210193  329090 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:45:15.210244  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.237085  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.237104  329090 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:45:15.237113  329090 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 08:45:15.237217  329090 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-756339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:45:15.237295  329090 ssh_runner.go:195] Run: crio config
	I1123 08:45:15.283601  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:15.283625  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:15.283643  329090 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:45:15.283669  329090 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-756339 NodeName:embed-certs-756339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:45:15.283837  329090 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-756339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
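The YAML above is generated per-node and written to /var/tmp/minikube/kubeadm.yaml.new (2217 bytes, a few lines below) before being handed to kubeadm init. A minimal text/template sketch of rendering the InitConfiguration stanza; the template is illustrative, not minikube's own:

package main

import (
	"os"
	"text/template"
)

// Field names match the generated YAML above; only the node-specific values
// are parameterized here.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	t.Execute(os.Stdout, struct {
		NodeName, NodeIP string
		APIServerPort    int
	}{"embed-certs-756339", "192.168.103.2", 8443})
}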
	I1123 08:45:15.283904  329090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:45:15.292504  329090 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:45:15.292566  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:45:15.300378  329090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 08:45:15.312974  329090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:45:15.327882  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1123 08:45:15.340181  329090 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:45:15.343646  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:15.354110  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:15.443097  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:15.467751  329090 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339 for IP: 192.168.103.2
	I1123 08:45:15.467775  329090 certs.go:195] generating shared ca certs ...
	I1123 08:45:15.467794  329090 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.467944  329090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:45:15.468013  329090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:45:15.468026  329090 certs.go:257] generating profile certs ...
	I1123 08:45:15.468092  329090 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key
	I1123 08:45:15.468108  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt with IP's: []
	I1123 08:45:15.681556  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt ...
	I1123 08:45:15.681578  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt: {Name:mk22797cd88ef1f778f787e25af3588a79d11855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681755  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key ...
	I1123 08:45:15.681771  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key: {Name:mk2507e79a5f05fa7cb11db2054cd014292902df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681880  329090 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354
	I1123 08:45:15.681896  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1123 08:45:15.727484  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 ...
	I1123 08:45:15.727506  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354: {Name:mkade0e3ba918afced6504828d64527edcb7e06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727677  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 ...
	I1123 08:45:15.727718  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354: {Name:mke39adf49845e1231f060e2780420238d4a87bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727834  329090 certs.go:382] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt
	I1123 08:45:15.727927  329090 certs.go:386] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key
	I1123 08:45:15.728008  329090 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key
	I1123 08:45:15.728025  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt with IP's: []
	I1123 08:45:15.834669  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt ...
	I1123 08:45:15.834720  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt: {Name:mkad5e6304235e6d8f0ebd086b0ccf458022d6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.834861  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key ...
	I1123 08:45:15.834879  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key: {Name:mka603d9600779233619dbc354e88b03aa5d1f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
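The profile certs generated above follow one pattern: a leaf keypair signed by the shared minikubeCA, with the apiserver cert carrying IP SANs for the in-cluster service IP (10.96.0.1), loopback, and the node IP (192.168.103.2). A self-contained crypto/x509 sketch of that signing step (it creates a throwaway CA so it runs standalone; minikube reuses its existing CA, and errors are elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the SANs from this run.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
		DNSNames: []string{"embed-certs-756339", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(len(der), err)
}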
	I1123 08:45:15.835045  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:45:15.835081  329090 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:45:15.835092  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:45:15.835118  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:45:15.835142  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:45:15.835178  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:45:15.835218  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:15.835729  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:45:15.855139  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:45:15.873868  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:45:15.894547  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:45:15.912933  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:45:15.930981  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:45:15.949401  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:45:15.970429  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:45:15.989205  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:45:16.008793  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:45:16.025737  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:45:16.043175  329090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:45:16.055931  329090 ssh_runner.go:195] Run: openssl version
	I1123 08:45:16.061639  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:45:16.069652  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073176  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073220  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.108921  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:45:16.116885  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:45:16.124882  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128591  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128656  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.185316  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:45:16.195245  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:45:16.206667  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211327  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211374  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.251180  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
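The test -L / ln -fs pairs above maintain OpenSSL's hashed-directory layout: each CA in /etc/ssl/certs must be reachable as <subject-hash>.0 so verification can find it by hash. A sketch that computes the hash the same way the log does, by shelling out to openssl:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash for a PEM certificate,
// matching the `openssl x509 -hash -noout -in ...` runs above.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(h, err) // b5213941 in this run
}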
	I1123 08:45:16.260175  329090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:45:16.264022  329090 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:45:16.264083  329090 kubeadm.go:401] StartCluster: {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:16.264171  329090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:45:16.264218  329090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:45:16.292235  329090 cri.go:89] found id: ""
	I1123 08:45:16.292292  329090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:45:16.300794  329090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:45:16.308741  329090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:45:16.308794  329090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:45:16.316404  329090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:45:16.316422  329090 kubeadm.go:158] found existing configuration files:
	
	I1123 08:45:16.316458  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:45:16.324309  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:45:16.324349  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:45:16.332260  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:45:16.340786  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:45:16.340842  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:45:16.348658  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.358536  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:45:16.358583  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.368595  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:45:16.377891  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:45:16.377952  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:45:16.386029  329090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:45:16.424131  329090 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:45:16.424226  329090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:45:16.444456  329090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:45:16.444527  329090 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:45:16.444572  329090 kubeadm.go:319] OS: Linux
	I1123 08:45:16.444654  329090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:45:16.444763  329090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:45:16.444824  329090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:45:16.444916  329090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:45:16.444986  329090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:45:16.445059  329090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:45:16.445128  329090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:45:16.445197  329090 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:45:16.502432  329090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:45:16.502566  329090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:45:16.502717  329090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:45:16.512573  329090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:45:16.514857  329090 out.go:252]   - Generating certificates and keys ...
	I1123 08:45:16.514990  329090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:45:16.515094  329090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:45:16.608081  329090 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:45:16.680528  329090 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:45:16.801156  329090 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:45:17.144723  329090 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:45:17.391838  329090 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:45:17.392042  329090 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.447222  329090 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:45:17.447383  329090 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.644625  329090 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:45:17.916674  329090 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:45:18.538498  329090 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:45:18.538728  329090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:45:18.967277  329090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:45:19.377546  329090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:45:19.559622  329090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:45:20.075738  329090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:45:20.364836  329090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:45:20.365389  329090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:45:20.380029  329090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 08:45:15.964678  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.463898  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.038557  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:20.040142  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:20.381602  329090 out.go:252]   - Booting up control plane ...
	I1123 08:45:20.381763  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:45:20.381900  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:45:20.382610  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:45:20.395865  329090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:45:20.396015  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:45:20.402081  329090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:45:20.402378  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:45:20.402436  329090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:45:20.508331  329090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:45:20.508495  329090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:45:22.009994  329090 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501781773s
	I1123 08:45:22.014389  329090 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:45:22.014519  329090 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:45:22.014637  329090 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:45:22.014773  329090 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:45:23.091748  329090 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.077310791s
	I1123 08:45:23.589008  329090 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.574535055s
	I1123 08:45:25.015461  329090 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001048624s
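The kubelet-check above polls a plain-HTTP healthz endpoint on 127.0.0.1:10248 until it returns 200, with a 4m0s budget; the apiserver and scheduler checks are the same idea over TLS. A sketch of the kubelet case:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// pollHealthz GETs the endpoint until it answers 200 or the budget expires.
// The apiserver's https://.../livez check would need a TLS-aware client;
// this covers only the plain-HTTP kubelet probe.
func pollHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(pollHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute))
}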
	I1123 08:45:25.026445  329090 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:45:25.036344  329090 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:45:25.045136  329090 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:45:25.045341  329090 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-756339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:45:25.052213  329090 kubeadm.go:319] [bootstrap-token] Using token: jh7osp.28agjpkabxiw65fh
	W1123 08:45:20.963406  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.964352  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.538516  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:24.539132  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:25.055029  329090 out.go:252]   - Configuring RBAC rules ...
	I1123 08:45:25.055175  329090 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:45:25.058117  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:45:25.062975  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:45:25.066360  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:45:25.069196  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:45:25.071492  329090 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:45:25.419913  329090 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:45:25.836463  329090 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:45:26.420358  329090 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:45:26.421135  329090 kubeadm.go:319] 
	I1123 08:45:26.421252  329090 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:45:26.421277  329090 kubeadm.go:319] 
	I1123 08:45:26.421378  329090 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:45:26.421390  329090 kubeadm.go:319] 
	I1123 08:45:26.421426  329090 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:45:26.421521  329090 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:45:26.421603  329090 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:45:26.421620  329090 kubeadm.go:319] 
	I1123 08:45:26.421735  329090 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:45:26.421746  329090 kubeadm.go:319] 
	I1123 08:45:26.421806  329090 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:45:26.421815  329090 kubeadm.go:319] 
	I1123 08:45:26.421881  329090 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:45:26.421994  329090 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:45:26.422098  329090 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:45:26.422107  329090 kubeadm.go:319] 
	I1123 08:45:26.422206  329090 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:45:26.422316  329090 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:45:26.422325  329090 kubeadm.go:319] 
	I1123 08:45:26.422429  329090 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422527  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 08:45:26.422562  329090 kubeadm.go:319] 	--control-plane 
	I1123 08:45:26.422571  329090 kubeadm.go:319] 
	I1123 08:45:26.422711  329090 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:45:26.422722  329090 kubeadm.go:319] 
	I1123 08:45:26.422841  329090 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422947  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
	I1123 08:45:26.425509  329090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:45:26.425638  329090 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
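The --discovery-token-ca-cert-hash value printed in the join commands above is not a hash of the whole certificate file: kubeadm hashes the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of that derivation, assuming the default kubeadm CA path on the control-plane node; run there, it should reproduce the sha256:00e9f1... value shown in the log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM data in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// RawSubjectPublicKeyInfo is exactly the DER blob kubeadm hashes.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}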
	I1123 08:45:26.425665  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:26.425679  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:26.427041  329090 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:45:26.427891  329090 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:45:26.432307  329090 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:45:26.432326  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:45:26.445364  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:45:26.642490  329090 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:45:26.642551  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:26.642592  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-756339 minikube.k8s.io/updated_at=2025_11_23T08_45_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-756339 minikube.k8s.io/primary=true
	I1123 08:45:26.729263  329090 ops.go:34] apiserver oom_adj: -16
	I1123 08:45:26.729393  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 08:45:25.464467  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:27.964097  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:26.539240  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:29.038507  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:27.229843  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:27.730298  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.230009  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.730490  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.229984  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.730299  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.229522  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.729582  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.230290  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.293892  329090 kubeadm.go:1114] duration metric: took 4.651396638s to wait for elevateKubeSystemPrivileges
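The burst of `kubectl get sa default` runs above is a poll loop: minikube retries roughly every 500ms until the "default" ServiceAccount exists, which signals that the controller-manager's token machinery is up. A minimal client-go sketch of the same wait (the kubeconfig path is taken from the log; the 2-minute timeout is an assumption for illustration):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(
			context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	panic("timed out waiting for default ServiceAccount")
}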
	I1123 08:45:31.293931  329090 kubeadm.go:403] duration metric: took 15.029851328s to StartCluster
	I1123 08:45:31.293953  329090 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.294038  329090 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:31.295585  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.295872  329090 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:31.295936  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:45:31.296007  329090 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:45:31.296114  329090 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-756339"
	I1123 08:45:31.296118  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:31.296134  329090 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-756339"
	I1123 08:45:31.296128  329090 addons.go:70] Setting default-storageclass=true in profile "embed-certs-756339"
	I1123 08:45:31.296166  329090 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-756339"
	I1123 08:45:31.296176  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.296604  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.296720  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.297232  329090 out.go:179] * Verifying Kubernetes components...
	I1123 08:45:31.299135  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:31.322679  329090 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:45:31.324511  329090 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.324536  329090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:45:31.324593  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.329451  329090 addons.go:239] Setting addon default-storageclass=true in "embed-certs-756339"
	I1123 08:45:31.329500  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.330018  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.359473  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.359508  329090 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.359523  329090 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:45:31.359576  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.383150  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.400104  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:45:31.438850  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:31.477184  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.500079  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.590832  329090 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
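For reference, the long sed pipeline a few lines up is what produced this "host record injected" message: it rewrites the coredns ConfigMap in place, inserting a `log` directive before `errors` and the following `hosts` block ahead of the `forward . /etc/resolv.conf` line, so that host.minikube.internal resolves to the host-side gateway:

hosts {
   192.168.103.1 host.minikube.internal
   fallthrough
}

The `fallthrough` keeps every other name flowing to the remaining plugins, so only the injected record is answered locally.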
	I1123 08:45:31.592356  329090 node_ready.go:35] waiting up to 6m0s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:31.806094  329090 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 08:45:30.466331  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:32.963158  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:34.963993  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:31.541665  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:34.038345  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:31.807238  329090 addons.go:530] duration metric: took 511.238501ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:45:32.094332  329090 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-756339" context rescaled to 1 replicas
	W1123 08:45:33.595476  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:36.094914  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:37.463401  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:39.463744  323135 pod_ready.go:94] pod "coredns-66bc5c9577-8f8f5" is "Ready"
	I1123 08:45:39.463771  323135 pod_ready.go:86] duration metric: took 37.505301624s for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.466073  323135 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.469881  323135 pod_ready.go:94] pod "etcd-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.469907  323135 pod_ready.go:86] duration metric: took 3.813451ms for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.471783  323135 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.475591  323135 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.475615  323135 pod_ready.go:86] duration metric: took 3.808626ms for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.477543  323135 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.662072  323135 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.662095  323135 pod_ready.go:86] duration metric: took 184.532328ms for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.861972  323135 pod_ready.go:83] waiting for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.262090  323135 pod_ready.go:94] pod "kube-proxy-sn4sp" is "Ready"
	I1123 08:45:40.262116  323135 pod_ready.go:86] duration metric: took 400.120277ms for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.462054  323135 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862186  323135 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:40.862212  323135 pod_ready.go:86] duration metric: took 400.136767ms for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862222  323135 pod_ready.go:40] duration metric: took 38.907156113s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:40.906296  323135 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:40.908135  323135 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-726261" cluster and "default" namespace by default
	W1123 08:45:36.537535  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:38.537920  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:40.537903  323816 pod_ready.go:94] pod "coredns-66bc5c9577-khlrk" is "Ready"
	I1123 08:45:40.537927  323816 pod_ready.go:86] duration metric: took 38.004948026s for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.540197  323816 pod_ready.go:83] waiting for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.543594  323816 pod_ready.go:94] pod "etcd-no-preload-187607" is "Ready"
	I1123 08:45:40.543613  323816 pod_ready.go:86] duration metric: took 3.39504ms for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.545430  323816 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.548523  323816 pod_ready.go:94] pod "kube-apiserver-no-preload-187607" is "Ready"
	I1123 08:45:40.548540  323816 pod_ready.go:86] duration metric: took 3.086438ms for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.550144  323816 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.736784  323816 pod_ready.go:94] pod "kube-controller-manager-no-preload-187607" is "Ready"
	I1123 08:45:40.736810  323816 pod_ready.go:86] duration metric: took 186.650289ms for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.936965  323816 pod_ready.go:83] waiting for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:38.095893  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:40.595721  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	I1123 08:45:41.336483  323816 pod_ready.go:94] pod "kube-proxy-f9d8j" is "Ready"
	I1123 08:45:41.336508  323816 pod_ready.go:86] duration metric: took 399.518187ms for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.536451  323816 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936068  323816 pod_ready.go:94] pod "kube-scheduler-no-preload-187607" is "Ready"
	I1123 08:45:41.936095  323816 pod_ready.go:86] duration metric: took 399.617585ms for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936110  323816 pod_ready.go:40] duration metric: took 39.406642608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:41.977753  323816 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:41.979147  323816 out.go:179] * Done! kubectl is now configured to use "no-preload-187607" cluster and "default" namespace by default
	I1123 08:45:43.095643  329090 node_ready.go:49] node "embed-certs-756339" is "Ready"
	I1123 08:45:43.095676  329090 node_ready.go:38] duration metric: took 11.503297149s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:43.095722  329090 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:45:43.095787  329090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:45:43.107848  329090 api_server.go:72] duration metric: took 11.811934824s to wait for apiserver process to appear ...
	I1123 08:45:43.107869  329090 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:43.107884  329090 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:45:43.112629  329090 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:45:43.113413  329090 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:43.113433  329090 api_server.go:131] duration metric: took 5.559653ms to wait for apiserver health ...
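The healthz probe above is a plain HTTPS GET that expects a 200 response with body "ok". A minimal Go sketch against the endpoint from the log (TLS verification is skipped here for brevity; a real client should trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Insecure for brevity only; minikube authenticates the apiserver properly.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.103.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}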
	I1123 08:45:43.113441  329090 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:43.116485  329090 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:43.116510  329090 system_pods.go:61] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.116515  329090 system_pods.go:61] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.116520  329090 system_pods.go:61] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.116525  329090 system_pods.go:61] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.116532  329090 system_pods.go:61] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.116536  329090 system_pods.go:61] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.116539  329090 system_pods.go:61] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.116545  329090 system_pods.go:61] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.116550  329090 system_pods.go:74] duration metric: took 3.105251ms to wait for pod list to return data ...
	I1123 08:45:43.116558  329090 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:43.118523  329090 default_sa.go:45] found service account: "default"
	I1123 08:45:43.118538  329090 default_sa.go:55] duration metric: took 1.974886ms for default service account to be created ...
	I1123 08:45:43.118545  329090 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:43.120780  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.120802  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.120810  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.120815  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.120819  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.120826  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.120831  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.120834  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.120839  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.120863  329090 retry.go:31] will retry after 215.602357ms: missing components: kube-dns
	I1123 08:45:43.340425  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.340455  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.340462  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.340467  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.340472  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.340477  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.340480  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.340483  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.340488  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.340504  329090 retry.go:31] will retry after 325.287893ms: missing components: kube-dns
	I1123 08:45:43.668913  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.668952  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.668962  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.668971  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.668977  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.668983  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.668987  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.668993  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.669002  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.669025  329090 retry.go:31] will retry after 462.937798ms: missing components: kube-dns
	I1123 08:45:44.135919  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:44.135950  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Running
	I1123 08:45:44.135957  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:44.135962  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:44.135967  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:44.135972  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:44.135977  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:44.135983  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:44.135988  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Running
	I1123 08:45:44.135997  329090 system_pods.go:126] duration metric: took 1.017446384s to wait for k8s-apps to be running ...
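The retry.go waits above (215ms, then 325ms, then 462ms) show a growing, jittered backoff between pod-list polls. A small Go sketch of that pattern; the 1.5x growth factor and 10% jitter are assumptions for illustration, not minikube's exact constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	d := 200 * time.Millisecond
	for attempt := 1; attempt <= 3; attempt++ {
		// jitter each step by up to 10%, then grow the base by ~1.5x
		sleep := d + time.Duration(rand.Int63n(int64(d/10)))
		fmt.Printf("attempt %d: will retry after %v\n", attempt, sleep)
		time.Sleep(sleep)
		d = d * 3 / 2
	}
}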
	I1123 08:45:44.136008  329090 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:44.136053  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:44.148387  329090 system_svc.go:56] duration metric: took 12.375192ms WaitForService to wait for kubelet
	I1123 08:45:44.148408  329090 kubeadm.go:587] duration metric: took 12.85249816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:44.148426  329090 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:44.150884  329090 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:44.150906  329090 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:44.150923  329090 node_conditions.go:105] duration metric: took 2.493335ms to run NodePressure ...
	I1123 08:45:44.150933  329090 start.go:242] waiting for startup goroutines ...
	I1123 08:45:44.150943  329090 start.go:247] waiting for cluster config update ...
	I1123 08:45:44.150953  329090 start.go:256] writing updated cluster config ...
	I1123 08:45:44.151188  329090 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:44.154964  329090 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:44.158442  329090 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.162122  329090 pod_ready.go:94] pod "coredns-66bc5c9577-ffmn2" is "Ready"
	I1123 08:45:44.162139  329090 pod_ready.go:86] duration metric: took 3.680173ms for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.163781  329090 pod_ready.go:83] waiting for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.167030  329090 pod_ready.go:94] pod "etcd-embed-certs-756339" is "Ready"
	I1123 08:45:44.167046  329090 pod_ready.go:86] duration metric: took 3.249458ms for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.168620  329090 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.171889  329090 pod_ready.go:94] pod "kube-apiserver-embed-certs-756339" is "Ready"
	I1123 08:45:44.171905  329090 pod_ready.go:86] duration metric: took 3.265991ms for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.173681  329090 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.558804  329090 pod_ready.go:94] pod "kube-controller-manager-embed-certs-756339" is "Ready"
	I1123 08:45:44.558838  329090 pod_ready.go:86] duration metric: took 385.124392ms for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.759793  329090 pod_ready.go:83] waiting for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.158864  329090 pod_ready.go:94] pod "kube-proxy-npnsh" is "Ready"
	I1123 08:45:45.158887  329090 pod_ready.go:86] duration metric: took 399.071703ms for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.360200  329090 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758770  329090 pod_ready.go:94] pod "kube-scheduler-embed-certs-756339" is "Ready"
	I1123 08:45:45.758800  329090 pod_ready.go:86] duration metric: took 398.571969ms for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758811  329090 pod_ready.go:40] duration metric: took 1.603821403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:45.800049  329090 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:45.802064  329090 out.go:179] * Done! kubectl is now configured to use "embed-certs-756339" cluster and "default" namespace by default
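All of the pod_ready checks in this log reduce to one test: a pod counts as "Ready" when its PodReady condition reports True. A minimal client-go sketch of that check, using the kubeconfig path and a pod name taken from the log above:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(
		context.TODO(), "coredns-66bc5c9577-ffmn2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}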
	
	
	==> CRI-O <==
	Nov 23 08:45:14 no-preload-187607 crio[568]: time="2025-11-23T08:45:14.50697807Z" level=info msg="Started container" PID=1700 containerID=8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper id=466b3896-a7bb-4df4-a299-2d9ff390e087 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f252061dd11e58b7aa8da24165aadd581d83651ba9757de92900f6f4f523e628
	Nov 23 08:45:15 no-preload-187607 crio[568]: time="2025-11-23T08:45:15.355999571Z" level=info msg="Removing container: a978dc4166a0721e1ca3efca1a9e1e1fbeacd5219eeb0819615d697554d3b861" id=486c8124-f382-4a0d-9139-94b7b3be9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:15 no-preload-187607 crio[568]: time="2025-11-23T08:45:15.368021143Z" level=info msg="Removed container a978dc4166a0721e1ca3efca1a9e1e1fbeacd5219eeb0819615d697554d3b861: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper" id=486c8124-f382-4a0d-9139-94b7b3be9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.266434756Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=51cbe557-24fb-42da-b92b-d6e701ff3283 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.267396946Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=87ee8c00-0fdd-4ede-9ee1-bfdb4a936481 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.268463762Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper" id=87c84b8b-1c0b-4c53-a6fa-d3bf0a98fe8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.268584894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.274155112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.274609227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.314154409Z" level=info msg="Created container 9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper" id=87c84b8b-1c0b-4c53-a6fa-d3bf0a98fe8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.314908714Z" level=info msg="Starting container: 9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f" id=b1cae5ed-3a16-4dcc-b84f-b280bd38f198 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.316517916Z" level=info msg="Started container" PID=1712 containerID=9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper id=b1cae5ed-3a16-4dcc-b84f-b280bd38f198 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f252061dd11e58b7aa8da24165aadd581d83651ba9757de92900f6f4f523e628
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.395618687Z" level=info msg="Removing container: 8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1" id=aa24d49c-8966-49e8-965d-93f38b45d5ae name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:29 no-preload-187607 crio[568]: time="2025-11-23T08:45:29.404390104Z" level=info msg="Removed container 8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b/dashboard-metrics-scraper" id=aa24d49c-8966-49e8-965d-93f38b45d5ae name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.406235352Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=af273b10-4dbe-424f-b344-7753d2d80b7d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.407076382Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c19f2f41-c061-4d0d-9412-6ed71c0f5748 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.408118241Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=00d499b5-edf6-4c7d-9367-374f68a895e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.408242394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.413383581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.41357001Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ff35123aea3a53ef0d6d170a28b288aedbfc75ac397c4854faa2cb30a8f8fd89/merged/etc/passwd: no such file or directory"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.413595711Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ff35123aea3a53ef0d6d170a28b288aedbfc75ac397c4854faa2cb30a8f8fd89/merged/etc/group: no such file or directory"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.413889719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.444669211Z" level=info msg="Created container c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214: kube-system/storage-provisioner/storage-provisioner" id=00d499b5-edf6-4c7d-9367-374f68a895e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.445194646Z" level=info msg="Starting container: c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214" id=49bd0dc8-16ae-43cb-b1f8-02b27a04a4b9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:33 no-preload-187607 crio[568]: time="2025-11-23T08:45:33.447037721Z" level=info msg="Started container" PID=1726 containerID=c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214 description=kube-system/storage-provisioner/storage-provisioner id=49bd0dc8-16ae-43cb-b1f8-02b27a04a4b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=23e97d90cf1f092a96d4535340b59be9917de2f5ef91878c69266fdc45d0b634
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c6c270dccd32c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   23e97d90cf1f0       storage-provisioner                          kube-system
	9a28a511032a2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   f252061dd11e5       dashboard-metrics-scraper-6ffb444bf9-hcb2b   kubernetes-dashboard
	f6600a361a3ba       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   f02426d39a6cb       kubernetes-dashboard-855c9754f9-c25qj        kubernetes-dashboard
	228561f3c58a8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   24905f1565670       busybox                                      default
	0f65ae30d25f0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   63384f1b6547d       coredns-66bc5c9577-khlrk                     kube-system
	c7d79d91171ad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   340cae71ce404       kindnet-67c62                                kube-system
	3c52daba0a02a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   23e97d90cf1f0       storage-provisioner                          kube-system
	4aa18c92f3f57       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   4ab9e21b47c7c       kube-proxy-f9d8j                             kube-system
	82c67fc0d0d50       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   b4360f999ab57       etcd-no-preload-187607                       kube-system
	58bccd8b52572       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   9a4549b6d8756       kube-apiserver-no-preload-187607             kube-system
	f7dc3b2c3eb35       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   de9ba1fc3fa90       kube-controller-manager-no-preload-187607    kube-system
	f9c1a46853ec5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   514d893b96883       kube-scheduler-no-preload-187607             kube-system
	
	
	==> coredns [0f65ae30d25f0e6796dc383f8f723ee3a043d903cea5be75fb9ad29429a39fa0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53876 - 10530 "HINFO IN 8572700105362580089.124782930035999275. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.088248895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-187607
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-187607
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=no-preload-187607
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_44_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:44:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-187607
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:45:31 +0000   Sun, 23 Nov 2025 08:43:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:45:31 +0000   Sun, 23 Nov 2025 08:43:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:45:31 +0000   Sun, 23 Nov 2025 08:43:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:45:31 +0000   Sun, 23 Nov 2025 08:44:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-187607
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                156073dd-043d-48c6-8d6c-0e5326137d17
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-khlrk                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-187607                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-67c62                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-187607              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-187607     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-f9d8j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-187607              100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-hcb2b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c25qj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-187607 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-187607 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-187607 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node no-preload-187607 event: Registered Node no-preload-187607 in Controller
	  Normal  NodeReady                97s                kubelet          Node no-preload-187607 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node no-preload-187607 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node no-preload-187607 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node no-preload-187607 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node no-preload-187607 event: Registered Node no-preload-187607 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [82c67fc0d0d50ab08e241d39a2087b1c3e8bc3f645f3bfdeeb79a7ab0f98af22] <==
	{"level":"info","ts":"2025-11-23T08:45:05.539225Z","caller":"traceutil/trace.go:172","msg":"trace[1643612090] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"187.246433ms","start":"2025-11-23T08:45:05.351964Z","end":"2025-11-23T08:45:05.539210Z","steps":["trace[1643612090] 'process raft request'  (duration: 187.155382ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:05.539255Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.313852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" limit:1 ","response":"range_response_count:1 size:4430"}
	{"level":"info","ts":"2025-11-23T08:45:05.539215Z","caller":"traceutil/trace.go:172","msg":"trace[2122619234] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"188.248399ms","start":"2025-11-23T08:45:05.350949Z","end":"2025-11-23T08:45:05.539197Z","steps":["trace[2122619234] 'process raft request'  (duration: 188.13882ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.539295Z","caller":"traceutil/trace.go:172","msg":"trace[99005969] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:1; response_revision:493; }","duration":"109.360342ms","start":"2025-11-23T08:45:05.429922Z","end":"2025-11-23T08:45:05.539282Z","steps":["trace[99005969] 'agreement among raft nodes before linearized reading'  (duration: 109.238345ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.539335Z","caller":"traceutil/trace.go:172","msg":"trace[523077530] transaction","detail":"{read_only:false; response_revision:491; number_of_response:1; }","duration":"188.40504ms","start":"2025-11-23T08:45:05.350911Z","end":"2025-11-23T08:45:05.539316Z","steps":["trace[523077530] 'process raft request'  (duration: 185.201162ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728096Z","caller":"traceutil/trace.go:172","msg":"trace[205968407] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"183.628429ms","start":"2025-11-23T08:45:05.544448Z","end":"2025-11-23T08:45:05.728077Z","steps":["trace[205968407] 'process raft request'  (duration: 180.295065ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728240Z","caller":"traceutil/trace.go:172","msg":"trace[814500553] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"182.049169ms","start":"2025-11-23T08:45:05.546175Z","end":"2025-11-23T08:45:05.728224Z","steps":["trace[814500553] 'process raft request'  (duration: 182.01285ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728270Z","caller":"traceutil/trace.go:172","msg":"trace[1579071933] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"182.839414ms","start":"2025-11-23T08:45:05.545418Z","end":"2025-11-23T08:45:05.728257Z","steps":["trace[1579071933] 'process raft request'  (duration: 182.643262ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728351Z","caller":"traceutil/trace.go:172","msg":"trace[1017226904] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"183.90296ms","start":"2025-11-23T08:45:05.544434Z","end":"2025-11-23T08:45:05.728337Z","steps":["trace[1017226904] 'process raft request'  (duration: 183.557154ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728355Z","caller":"traceutil/trace.go:172","msg":"trace[1557003027] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"182.822296ms","start":"2025-11-23T08:45:05.545525Z","end":"2025-11-23T08:45:05.728347Z","steps":["trace[1557003027] 'process raft request'  (duration: 182.583427ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.728450Z","caller":"traceutil/trace.go:172","msg":"trace[1857130239] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"182.672715ms","start":"2025-11-23T08:45:05.545765Z","end":"2025-11-23T08:45:05.728438Z","steps":["trace[1857130239] 'process raft request'  (duration: 182.381249ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.730803Z","caller":"traceutil/trace.go:172","msg":"trace[672235072] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"108.455123ms","start":"2025-11-23T08:45:05.622338Z","end":"2025-11-23T08:45:05.730793Z","steps":["trace[672235072] 'process raft request'  (duration: 108.388894ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.892455Z","caller":"traceutil/trace.go:172","msg":"trace[2018686248] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"159.424324ms","start":"2025-11-23T08:45:05.733014Z","end":"2025-11-23T08:45:05.892438Z","steps":["trace[2018686248] 'process raft request'  (duration: 129.795449ms)","trace[2018686248] 'compare'  (duration: 29.525603ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:05.906741Z","caller":"traceutil/trace.go:172","msg":"trace[1489864518] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"173.71283ms","start":"2025-11-23T08:45:05.733012Z","end":"2025-11-23T08:45:05.906725Z","steps":["trace[1489864518] 'process raft request'  (duration: 173.551406ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.906769Z","caller":"traceutil/trace.go:172","msg":"trace[1809762921] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"172.246802ms","start":"2025-11-23T08:45:05.734514Z","end":"2025-11-23T08:45:05.906761Z","steps":["trace[1809762921] 'process raft request'  (duration: 172.155005ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.906846Z","caller":"traceutil/trace.go:172","msg":"trace[666099855] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"170.96402ms","start":"2025-11-23T08:45:05.735866Z","end":"2025-11-23T08:45:05.906830Z","steps":["trace[666099855] 'process raft request'  (duration: 170.866633ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:05.906964Z","caller":"traceutil/trace.go:172","msg":"trace[1645553263] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"172.458656ms","start":"2025-11-23T08:45:05.734495Z","end":"2025-11-23T08:45:05.906953Z","steps":["trace[1645553263] 'process raft request'  (duration: 172.130118ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:06.131295Z","caller":"traceutil/trace.go:172","msg":"trace[1244225681] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"218.366799ms","start":"2025-11-23T08:45:05.912907Z","end":"2025-11-23T08:45:06.131273Z","steps":["trace[1244225681] 'process raft request'  (duration: 183.582038ms)","trace[1244225681] 'compare'  (duration: 34.623056ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:06.131391Z","caller":"traceutil/trace.go:172","msg":"trace[1945324553] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"217.240667ms","start":"2025-11-23T08:45:05.914138Z","end":"2025-11-23T08:45:06.131378Z","steps":["trace[1945324553] 'process raft request'  (duration: 217.075776ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:45:06.400508Z","caller":"traceutil/trace.go:172","msg":"trace[1194367375] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"213.784788ms","start":"2025-11-23T08:45:06.186709Z","end":"2025-11-23T08:45:06.400494Z","steps":["trace[1194367375] 'process raft request'  (duration: 207.446676ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T08:45:06.703653Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.232216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-khlrk\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-11-23T08:45:06.703735Z","caller":"traceutil/trace.go:172","msg":"trace[906958138] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-khlrk; range_end:; response_count:1; response_revision:514; }","duration":"169.322892ms","start":"2025-11-23T08:45:06.534400Z","end":"2025-11-23T08:45:06.703722Z","steps":["trace[906958138] 'agreement among raft nodes before linearized reading'  (duration: 27.00419ms)","trace[906958138] 'range keys from in-memory index tree'  (duration: 142.143279ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T08:45:06.704111Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.172965ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361486681663 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/no-preload-187607.187a96558e9abf34\" mod_revision:512 > success:<request_put:<key:\"/registry/events/default/no-preload-187607.187a96558e9abf34\" value_size:624 lease:6571766361486681580 >> failure:<request_range:<key:\"/registry/events/default/no-preload-187607.187a96558e9abf34\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T08:45:06.704186Z","caller":"traceutil/trace.go:172","msg":"trace[2086217684] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"274.391641ms","start":"2025-11-23T08:45:06.429782Z","end":"2025-11-23T08:45:06.704174Z","steps":["trace[2086217684] 'process raft request'  (duration: 131.67672ms)","trace[2086217684] 'compare'  (duration: 142.099979ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T08:45:06.989318Z","caller":"traceutil/trace.go:172","msg":"trace[939907356] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"166.727737ms","start":"2025-11-23T08:45:06.822571Z","end":"2025-11-23T08:45:06.989299Z","steps":["trace[939907356] 'process raft request'  (duration: 127.63493ms)","trace[939907356] 'compare'  (duration: 39.005041ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:45:58 up  1:28,  0 user,  load average: 3.69, 3.74, 2.45
	Linux no-preload-187607 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c7d79d91171ad2356ff4429be5853d33c2d0b45d87251302f6d1b783580ef9ee] <==
	I1123 08:45:03.331754       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:45:03.332049       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 08:45:03.332333       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:45:03.332362       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:45:03.332427       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:45:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:45:03.535850       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:45:03.535874       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:45:03.535886       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:45:03.536074       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:45:03.836803       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:03.836831       1 metrics.go:72] Registering metrics
	I1123 08:45:03.836879       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:13.535778       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:13.535834       1 main.go:301] handling current node
	I1123 08:45:23.540770       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:23.540818       1 main.go:301] handling current node
	I1123 08:45:33.536218       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:33.536257       1 main.go:301] handling current node
	I1123 08:45:43.537757       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:43.537799       1 main.go:301] handling current node
	I1123 08:45:53.544769       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 08:45:53.544805       1 main.go:301] handling current node
	
	
	==> kube-apiserver [58bccd8b525725bf0e119a031f7704340d4a582f1f9d22e35700e56c5414fc15] <==
	I1123 08:45:01.291031       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:45:01.296909       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 08:45:01.301461       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 08:45:01.301563       1 policy_source.go:240] refreshing policies
	I1123 08:45:01.302385       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 08:45:01.303386       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 08:45:01.305784       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 08:45:01.305877       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:45:01.306740       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:45:01.306760       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:45:01.306769       1 cache.go:39] Caches are synced for autoregister controller
	E1123 08:45:01.324255       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:45:01.337257       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:45:01.337565       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:45:01.357385       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:45:01.714332       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:45:01.748302       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:45:01.774043       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:45:01.785408       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:45:01.836001       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.136.179"}
	I1123 08:45:01.847637       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.66.152"}
	I1123 08:45:02.180656       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:45:04.806042       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:45:05.003356       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:45:05.251887       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f7dc3b2c3eb35a85ed7f46e5a51507d750e9e62d6d4e5f5d8cf809a595a3fbb5] <==
	I1123 08:45:04.559864       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:45:04.559892       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:45:04.559924       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:45:04.561557       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:45:04.574936       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:04.577070       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:45:04.599603       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:04.599623       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:45:04.599633       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:04.599762       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:45:04.600051       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 08:45:04.600060       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:45:04.600379       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:45:04.600388       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:45:04.600498       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:45:04.600602       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:45:04.602384       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:45:04.606100       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:45:04.606193       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:45:04.607172       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:04.608300       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:45:04.610550       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:45:04.613812       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:45:04.616068       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:45:04.628510       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4aa18c92f3f578f172c0e283a0c69d67753703f1ad1da5f13d492a4f417e49f1] <==
	I1123 08:45:03.142410       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:45:03.226982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:45:03.327668       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:45:03.327714       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 08:45:03.327881       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:45:03.350055       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:45:03.350112       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:45:03.355895       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:45:03.356336       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:45:03.356354       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:03.358302       1 config.go:309] "Starting node config controller"
	I1123 08:45:03.358995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:45:03.359014       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:45:03.358344       1 config.go:200] "Starting service config controller"
	I1123 08:45:03.359027       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:45:03.358317       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:45:03.359068       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:45:03.359273       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:45:03.359360       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:45:03.459512       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:45:03.459555       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:45:03.459581       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f9c1a46853ec5ff3a03c27f07d016527c9affe0091ecc22c9627ff73f8705db1] <==
	I1123 08:44:59.680281       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:45:01.329629       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:45:01.329742       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:01.339044       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:45:01.339483       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:45:01.339356       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:01.339547       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:01.339405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:01.339833       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:01.339860       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:45:01.339843       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:45:01.441298       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:45:01.442351       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:45:01.442483       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 23 08:45:05 no-preload-187607 kubelet[706]: I1123 08:45:05.983459     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ac49e3d-7eab-45e2-ab84-ef54283f4bfd-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-hcb2b\" (UID: \"4ac49e3d-7eab-45e2-ab84-ef54283f4bfd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b"
	Nov 23 08:45:05 no-preload-187607 kubelet[706]: I1123 08:45:05.983514     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjrxl\" (UniqueName: \"kubernetes.io/projected/4ac49e3d-7eab-45e2-ab84-ef54283f4bfd-kube-api-access-pjrxl\") pod \"dashboard-metrics-scraper-6ffb444bf9-hcb2b\" (UID: \"4ac49e3d-7eab-45e2-ab84-ef54283f4bfd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b"
	Nov 23 08:45:10 no-preload-187607 kubelet[706]: I1123 08:45:10.228926     706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:45:12 no-preload-187607 kubelet[706]: I1123 08:45:12.211159     706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c25qj" podStartSLOduration=2.7542132070000003 podStartE2EDuration="7.211135721s" podCreationTimestamp="2025-11-23 08:45:05 +0000 UTC" firstStartedPulling="2025-11-23 08:45:06.274357182 +0000 UTC m=+8.179108113" lastFinishedPulling="2025-11-23 08:45:10.731279678 +0000 UTC m=+12.636030627" observedRunningTime="2025-11-23 08:45:11.363261352 +0000 UTC m=+13.268012304" watchObservedRunningTime="2025-11-23 08:45:12.211135721 +0000 UTC m=+14.115886672"
	Nov 23 08:45:14 no-preload-187607 kubelet[706]: I1123 08:45:14.349964     706 scope.go:117] "RemoveContainer" containerID="a978dc4166a0721e1ca3efca1a9e1e1fbeacd5219eeb0819615d697554d3b861"
	Nov 23 08:45:15 no-preload-187607 kubelet[706]: I1123 08:45:15.354531     706 scope.go:117] "RemoveContainer" containerID="a978dc4166a0721e1ca3efca1a9e1e1fbeacd5219eeb0819615d697554d3b861"
	Nov 23 08:45:15 no-preload-187607 kubelet[706]: I1123 08:45:15.354752     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:15 no-preload-187607 kubelet[706]: E1123 08:45:15.354948     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:16 no-preload-187607 kubelet[706]: I1123 08:45:16.359638     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:16 no-preload-187607 kubelet[706]: E1123 08:45:16.359837     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:17 no-preload-187607 kubelet[706]: I1123 08:45:17.362737     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:17 no-preload-187607 kubelet[706]: E1123 08:45:17.362977     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:29 no-preload-187607 kubelet[706]: I1123 08:45:29.265851     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:29 no-preload-187607 kubelet[706]: I1123 08:45:29.394289     706 scope.go:117] "RemoveContainer" containerID="8bd3d49e198f1a8e001a65bf8dc5ffb51a36bbcea9ba7bd856584de795dab4e1"
	Nov 23 08:45:29 no-preload-187607 kubelet[706]: I1123 08:45:29.394514     706 scope.go:117] "RemoveContainer" containerID="9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f"
	Nov 23 08:45:29 no-preload-187607 kubelet[706]: E1123 08:45:29.394730     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:33 no-preload-187607 kubelet[706]: I1123 08:45:33.405868     706 scope.go:117] "RemoveContainer" containerID="3c52daba0a02a5f43db9a936c7bee455eaed07b8846c57f2a36e9d42a2c662b1"
	Nov 23 08:45:36 no-preload-187607 kubelet[706]: I1123 08:45:36.349258     706 scope.go:117] "RemoveContainer" containerID="9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f"
	Nov 23 08:45:36 no-preload-187607 kubelet[706]: E1123 08:45:36.349410     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:49 no-preload-187607 kubelet[706]: I1123 08:45:49.265999     706 scope.go:117] "RemoveContainer" containerID="9a28a511032a2fa829205c2903b88587377a624a585ccfe402c375255dda789f"
	Nov 23 08:45:49 no-preload-187607 kubelet[706]: E1123 08:45:49.266204     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hcb2b_kubernetes-dashboard(4ac49e3d-7eab-45e2-ab84-ef54283f4bfd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hcb2b" podUID="4ac49e3d-7eab-45e2-ab84-ef54283f4bfd"
	Nov 23 08:45:54 no-preload-187607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:45:54 no-preload-187607 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:45:54 no-preload-187607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:45:54 no-preload-187607 systemd[1]: kubelet.service: Consumed 1.662s CPU time.
	
	
	==> kubernetes-dashboard [f6600a361a3baa6724f669b340ef4e64b2062295514dc30b0ed6e119477cc6b2] <==
	2025/11/23 08:45:10 Starting overwatch
	2025/11/23 08:45:10 Using namespace: kubernetes-dashboard
	2025/11/23 08:45:10 Using in-cluster config to connect to apiserver
	2025/11/23 08:45:10 Using secret token for csrf signing
	2025/11/23 08:45:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:45:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:45:10 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:45:10 Generating JWE encryption key
	2025/11/23 08:45:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:45:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:45:11 Initializing JWE encryption key from synchronized object
	2025/11/23 08:45:11 Creating in-cluster Sidecar client
	2025/11/23 08:45:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:45:11 Serving insecurely on HTTP port: 9090
	2025/11/23 08:45:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3c52daba0a02a5f43db9a936c7bee455eaed07b8846c57f2a36e9d42a2c662b1] <==
	I1123 08:45:03.110936       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:45:33.113117       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c6c270dccd32c502da3fafcf547f6f6714b0f3418167733e063f8a10411f3214] <==
	I1123 08:45:33.458417       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:33.466795       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:33.466834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:33.468546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:36.923483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:41.183840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:44.782557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:47.836611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:50.858421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:50.863562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:50.863731       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:50.863845       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e82eb46b-b542-473b-9efe-cdbb2e96ba53", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-187607_5b79c719-1697-415d-b552-77186768d008 became leader
	I1123 08:45:50.863885       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-187607_5b79c719-1697-415d-b552-77186768d008!
	W1123 08:45:50.865507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:50.868766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:50.964065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-187607_5b79c719-1697-415d-b552-77186768d008!
	W1123 08:45:52.872591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:52.884973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:54.889144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:54.893418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:56.897565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:56.905212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-187607 -n no-preload-187607
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-187607 -n no-preload-187607: exit status 2 (332.027702ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-187607 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.917682ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:45:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
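The MK_ADDON_ENABLE_PAUSED failure above is minikube refusing to enable an addon while it believes the cluster may be paused: per the stderr, it probes pause state by shelling out to `sudo runc list -f json`, and that probe exits non-zero here because runc's state directory /run/runc does not exist. Below is a minimal Go sketch of such a probe, for illustration only — the struct fields follow runc's documented JSON list output, and this is not minikube's actual implementation:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the fields of interest in `runc list -f json` output.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "created", "running", "paused"
	}

	// listPaused runs `sudo runc list -f json` and returns the IDs of
	// containers reported as paused.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is the branch the test hits: runc exits with status 1,
			// printing "open /run/runc: no such file or directory" as in
			// the stderr captured above.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			// Mirrors the "check paused: list paused: runc: ..." wrapping above.
			fmt.Println("check paused:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}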
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-756339 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-756339 describe deploy/metrics-server -n kube-system: exit status 1 (70.231006ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-756339 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
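For context on the assertion above: the `--images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain` flag pair is meant to rewrite the addon's image and its registry, so the resulting deployment is expected to reference fake.domain/registry.k8s.io/echoserver:1.4 — the string the test checks for. A hypothetical sketch of that composition (not minikube's actual helper):

	package main

	import "fmt"

	// addonImage joins an optional registry override with an image reference,
	// the way the --registries/--images pair is expected to combine.
	func addonImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		// Values taken verbatim from the test invocation above.
		fmt.Println(addonImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
		// Output: fake.domain/registry.k8s.io/echoserver:1.4
	}

In this run the check never gets that far: the enable command itself failed on the paused-state probe, so deploy/metrics-server was never created and the describe returns NotFound.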
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-756339
helpers_test.go:243: (dbg) docker inspect embed-certs-756339:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f",
	        "Created": "2025-11-23T08:45:07.242299769Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 330205,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:45:07.304255818Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/hosts",
	        "LogPath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f-json.log",
	        "Name": "/embed-certs-756339",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-756339:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-756339",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f",
	                "LowerDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-756339",
	                "Source": "/var/lib/docker/volumes/embed-certs-756339/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-756339",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-756339",
	                "name.minikube.sigs.k8s.io": "embed-certs-756339",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ba12b94c816ce4c733f7600324569e73530a739f96566d10cd7f56c88bb7db98",
	            "SandboxKey": "/var/run/docker/netns/ba12b94c816c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-756339": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "081a90797f7b1b1fb1a39e8b587fd717235565d36ed01f430e48a85f0e009f66",
	                    "EndpointID": "e9dc3495a751d26f04176f6eca4fa9db7abe46db52b5561156246ead1818dd3f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "1e:18:23:98:f7:3c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-756339",
	                        "dcc19a70aae1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756339 -n embed-certs-756339
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-756339 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-756339 logs -n 25: (1.082903046s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-726261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-726261 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable metrics-server -p no-preload-187607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ stop    │ -p no-preload-187607 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ addons  │ enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p newest-cni-653361                                                                                                                                                                                                                          │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-177890                                                                                                                                                                                                               │ disable-driver-mounts-177890 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-057894 image list --format=json                                                                                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-057894 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-057894                                                                                                                                                                                                                     │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ default-k8s-diff-port-726261 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p default-k8s-diff-port-726261 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ image   │ no-preload-187607 image list --format=json                                                                                                                                                                                                    │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-187607 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:45:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:45:01.745123  329090 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:45:01.745432  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745440  329090 out.go:374] Setting ErrFile to fd 2...
	I1123 08:45:01.745446  329090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:45:01.745739  329090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:45:01.746375  329090 out.go:368] Setting JSON to false
	I1123 08:45:01.748064  329090 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5249,"bootTime":1763882253,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:45:01.748157  329090 start.go:143] virtualization: kvm guest
	I1123 08:45:01.750156  329090 out.go:179] * [embed-certs-756339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:45:01.753393  329090 notify.go:221] Checking for updates...
	I1123 08:45:01.753398  329090 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:45:01.755146  329090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:45:01.756598  329090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:01.757836  329090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:45:01.758954  329090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:45:01.760360  329090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:45:01.765276  329090 config.go:182] Loaded profile config "default-k8s-diff-port-726261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765522  329090 config.go:182] Loaded profile config "no-preload-187607": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:01.765681  329090 config.go:182] Loaded profile config "old-k8s-version-057894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 08:45:01.765827  329090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:45:01.800644  329090 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:45:01.801313  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.871017  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.860213573 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.871190  329090 docker.go:319] overlay module found
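
The two `docker system info --format "{{json .}}"` runs above are how minikube snapshots daemon state (cgroup driver, CPU/memory capacity, plugins) before validating the docker driver. A minimal standalone sketch of the same probe; the struct keeps only a hypothetical subset of the fields visible in the dump:

```go
// dockerinfo.go - probe the Docker daemon the way the log above does:
// run `docker system info --format "{{json .}}"` and decode the JSON.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// A small subset of the fields visible in the info dump above.
type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	OSType        string `json:"OSType"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("server=%s cgroup=%s cpus=%d mem=%d os=%s\n",
		info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal, info.OSType)
}
```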
	I1123 08:45:01.872879  329090 out.go:179] * Using the docker driver based on user configuration
	I1123 08:45:01.874146  329090 start.go:309] selected driver: docker
	I1123 08:45:01.874172  329090 start.go:927] validating driver "docker" against <nil>
	I1123 08:45:01.874185  329090 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:45:01.874731  329090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:45:01.950283  329090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 08:45:01.938442114 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:45:01.950526  329090 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:45:01.950805  329090 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.952251  329090 out.go:179] * Using Docker driver with root privileges
	I1123 08:45:01.953421  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:01.953493  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:01.953508  329090 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:45:01.953584  329090 start.go:353] cluster config:
	{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:01.954827  329090 out.go:179] * Starting "embed-certs-756339" primary control-plane node in "embed-certs-756339" cluster
	I1123 08:45:01.955848  329090 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:45:01.957107  329090 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:45:01.958365  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:01.958393  329090 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:45:01.958408  329090 cache.go:65] Caching tarball of preloaded images
	I1123 08:45:01.958465  329090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:45:01.958507  329090 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:45:01.958523  329090 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:45:01.958635  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:01.958661  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json: {Name:mk2bf238bbe57398e8f0e67e0ff345b4c996e47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:01.983475  329090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:45:01.983497  329090 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:45:01.983513  329090 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:45:01.983540  329090 start.go:360] acquireMachinesLock for embed-certs-756339: {Name:mk2607c5ea38ca6bd330e0a548b36202f67f84a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:45:01.983642  329090 start.go:364] duration metric: took 84.653µs to acquireMachinesLock for "embed-certs-756339"
	I1123 08:45:01.983672  329090 start.go:93] Provisioning new machine with config: &{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:01.983792  329090 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:45:01.986901  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.692445857s)
	I1123 08:45:01.987002  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.670756175s)
	I1123 08:45:01.987136  323816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.507320621s)
	I1123 08:45:01.987186  323816 api_server.go:72] duration metric: took 2.902108336s to wait for apiserver process to appear ...
	I1123 08:45:01.987204  323816 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:01.987282  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:01.988808  323816 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-187607 addons enable metrics-server
	
	I1123 08:45:01.992707  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:45:01.992732  323816 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
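
The 500s above are expected during startup: /healthz reports each post-start hook, and the wait loop polls until the rbac and priority-class bootstrap hooks flip to ok. A rough sketch of that polling loop, assuming the apiserver address from this run and skipping TLS verification because the endpoint presents a cluster-CA-signed certificate:

```go
// healthz.go - poll a kube-apiserver /healthz endpoint until it returns 200,
// printing the verbose per-check [+]/[-] output seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by the cluster CA, not a public one.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.94.2:8443/healthz" // address from this run; adjust for your cluster
	for {
		resp, err := client.Get(url)
		if err != nil {
			log.Printf("healthz: %v", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return // body is simply "ok" once every hook has completed
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```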
	I1123 08:45:01.994529  323816 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 08:45:01.422757  323135 addons.go:530] duration metric: took 3.555416147s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:01.910007  323135 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 08:45:01.915784  323135 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 08:45:01.917062  323135 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:01.917089  323135 api_server.go:131] duration metric: took 507.92158ms to wait for apiserver health ...
	I1123 08:45:01.917100  323135 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:01.921785  323135 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:01.921998  323135 system_pods.go:61] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.922039  323135 system_pods.go:61] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.922068  323135 system_pods.go:61] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.922079  323135 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.922087  323135 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.922095  323135 system_pods.go:61] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.922107  323135 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.922115  323135 system_pods.go:61] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.922124  323135 system_pods.go:74] duration metric: took 5.016936ms to wait for pod list to return data ...
	I1123 08:45:01.922189  323135 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:01.925409  323135 default_sa.go:45] found service account: "default"
	I1123 08:45:01.925452  323135 default_sa.go:55] duration metric: took 3.245595ms for default service account to be created ...
	I1123 08:45:01.925463  323135 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:01.931804  323135 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:01.931872  323135 system_pods.go:89] "coredns-66bc5c9577-8f8f5" [2972f876-77f7-4ac2-80df-ac460f83663e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:01.931898  323135 system_pods.go:89] "etcd-default-k8s-diff-port-726261" [4b1709d3-6ea5-4640-99c8-367feb3f7ed6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:01.931961  323135 system_pods.go:89] "kindnet-4zwgv" [9b5a136a-e2ec-4e01-b164-d48b0b01ccf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:01.931995  323135 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-726261" [1178d9a7-260e-43c4-bf7e-85797a7290ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:01.932018  323135 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-726261" [a4555292-548e-4144-aab6-8ca5d01d4a84] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:01.932037  323135 system_pods.go:89] "kube-proxy-sn4sp" [f78be2d8-1fdb-429f-be98-0cc11b6b8e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:01.932066  323135 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-726261" [59eaf31f-8288-43f9-8025-49309831de83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:01.932076  323135 system_pods.go:89] "storage-provisioner" [47dd6a2f-d285-4c11-9971-aba81adb5848] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:01.932086  323135 system_pods.go:126] duration metric: took 6.61665ms to wait for k8s-apps to be running ...
	I1123 08:45:01.932097  323135 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:01.932143  323135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:01.947263  323135 system_svc.go:56] duration metric: took 15.160659ms WaitForService to wait for kubelet
	I1123 08:45:01.947298  323135 kubeadm.go:587] duration metric: took 4.08017724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:01.947325  323135 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:01.950481  323135 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:01.950509  323135 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:01.950526  323135 node_conditions.go:105] duration metric: took 3.194245ms to run NodePressure ...
	I1123 08:45:01.950541  323135 start.go:242] waiting for startup goroutines ...
	I1123 08:45:01.950555  323135 start.go:247] waiting for cluster config update ...
	I1123 08:45:01.950571  323135 start.go:256] writing updated cluster config ...
	I1123 08:45:01.950876  323135 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:01.955038  323135 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:01.958449  323135 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:03.965246  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
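
The pod_ready lines above (and their counterparts for no-preload-187607 below) repeatedly fetch each kube-system pod and test its Ready condition, with a 4m0s budget. A condensed client-go equivalent; the kubeconfig path and pod name are placeholders taken from this run:

```go
// podready.go - wait for a pod's Ready condition, roughly what the
// pod_ready.go lines above are doing for each kube-system pod.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-66bc5c9577-8f8f5", metav1.GetOptions{}) // pod name from this run
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to be Ready")
}
```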
	I1123 08:45:01.995584  323816 addons.go:530] duration metric: took 2.910424664s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:45:02.487321  323816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 08:45:02.491678  323816 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 08:45:02.492738  323816 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:02.492762  323816 api_server.go:131] duration metric: took 505.498506ms to wait for apiserver health ...
	I1123 08:45:02.492770  323816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:02.496254  323816 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:02.496282  323816 system_pods.go:61] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.496290  323816 system_pods.go:61] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.496296  323816 system_pods.go:61] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.496302  323816 system_pods.go:61] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.496310  323816 system_pods.go:61] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.496317  323816 system_pods.go:61] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.496324  323816 system_pods.go:61] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.496334  323816 system_pods.go:61] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.496340  323816 system_pods.go:74] duration metric: took 3.565076ms to wait for pod list to return data ...
	I1123 08:45:02.496348  323816 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:02.498409  323816 default_sa.go:45] found service account: "default"
	I1123 08:45:02.498426  323816 default_sa.go:55] duration metric: took 2.073405ms for default service account to be created ...
	I1123 08:45:02.498434  323816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:02.500853  323816 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:02.500888  323816 system_pods.go:89] "coredns-66bc5c9577-khlrk" [e96e8ec4-1ecf-4171-b927-a3353ac88d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:02.500899  323816 system_pods.go:89] "etcd-no-preload-187607" [9b19beec-b829-425e-a6af-5e2ae605dcee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:45:02.500912  323816 system_pods.go:89] "kindnet-67c62" [073134c6-398a-4c03-9c1e-4970b98909fb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:45:02.500929  323816 system_pods.go:89] "kube-apiserver-no-preload-187607" [7bff61ce-50df-4703-b01b-4fb967bf025b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:45:02.500941  323816 system_pods.go:89] "kube-controller-manager-no-preload-187607" [8ef36cb8-685c-4e6b-9d2e-39c9312af974] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:45:02.500951  323816 system_pods.go:89] "kube-proxy-f9d8j" [3d59ac36-2289-4f2f-8c9f-110235f453ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:45:02.500961  323816 system_pods.go:89] "kube-scheduler-no-preload-187607" [746c452c-4e97-488e-86ea-0313df0eb9e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:45:02.500971  323816 system_pods.go:89] "storage-provisioner" [a02e6fe9-9deb-4a63-b887-bd353f7c37c5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:02.500978  323816 system_pods.go:126] duration metric: took 2.538671ms to wait for k8s-apps to be running ...
	I1123 08:45:02.500991  323816 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:02.501036  323816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:02.522199  323816 system_svc.go:56] duration metric: took 21.201972ms WaitForService to wait for kubelet
	I1123 08:45:02.522225  323816 kubeadm.go:587] duration metric: took 3.437147085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:02.522246  323816 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:02.524870  323816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:02.524905  323816 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:02.524925  323816 node_conditions.go:105] duration metric: took 2.673388ms to run NodePressure ...
	I1123 08:45:02.524943  323816 start.go:242] waiting for startup goroutines ...
	I1123 08:45:02.524953  323816 start.go:247] waiting for cluster config update ...
	I1123 08:45:02.524970  323816 start.go:256] writing updated cluster config ...
	I1123 08:45:02.525241  323816 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:02.529440  323816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:02.532956  323816 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:04.545550  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:01.985817  329090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:45:01.986054  329090 start.go:159] libmachine.API.Create for "embed-certs-756339" (driver="docker")
	I1123 08:45:01.986094  329090 client.go:173] LocalClient.Create starting
	I1123 08:45:01.986158  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem
	I1123 08:45:01.986202  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986228  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986299  329090 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem
	I1123 08:45:01.986331  329090 main.go:143] libmachine: Decoding PEM data...
	I1123 08:45:01.986349  329090 main.go:143] libmachine: Parsing certificate...
	I1123 08:45:01.986747  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:45:02.006351  329090 cli_runner.go:211] docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:45:02.006428  329090 network_create.go:284] running [docker network inspect embed-certs-756339] to gather additional debugging logs...
	I1123 08:45:02.006453  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339
	W1123 08:45:02.024029  329090 cli_runner.go:211] docker network inspect embed-certs-756339 returned with exit code 1
	I1123 08:45:02.024056  329090 network_create.go:287] error running [docker network inspect embed-certs-756339]: docker network inspect embed-certs-756339: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-756339 not found
	I1123 08:45:02.024076  329090 network_create.go:289] output of [docker network inspect embed-certs-756339]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-756339 not found
	
	** /stderr **
	I1123 08:45:02.024188  329090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:02.041589  329090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
	I1123 08:45:02.042147  329090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f2604e536ddf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:ab:00:4e:41:e6} reservation:<nil>}
	I1123 08:45:02.042884  329090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ce97320dd675 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:5a:a5:0b:c0:b0} reservation:<nil>}
	I1123 08:45:02.043340  329090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c80b7bca17a7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:f1:41:59:09:b5} reservation:<nil>}
	I1123 08:45:02.043937  329090 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8e58961f3024 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b6:f0:e4:3c:63:d5} reservation:<nil>}
	I1123 08:45:02.044437  329090 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-e4a86ee726da IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ae:37:bc:fe:9d:3a} reservation:<nil>}
	I1123 08:45:02.045221  329090 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06cd0}
	I1123 08:45:02.045242  329090 network_create.go:124] attempt to create docker network embed-certs-756339 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1123 08:45:02.045287  329090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-756339 embed-certs-756339
	I1123 08:45:02.095267  329090 network_create.go:108] docker network embed-certs-756339 192.168.103.0/24 created
	I1123 08:45:02.095296  329090 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-756339" container
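
The subnet walk above is mechanical: candidate 192.168.x.0/24 ranges are tried in strides of 9 in the third octet (49, 58, 67, ... 103) until one is not claimed by an existing bridge, the network is created with its gateway at .1, and the node gets the static .2 address. A small sketch of the scan using the docker CLI; checking existence via `docker network inspect` is a simplification of the interface probing minikube actually does:

```go
// freesubnet.go - find the first 192.168.x.0/24 not used by an existing
// docker network, stepping the third octet by 9 as the log above does.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// takenSubnets collects the IPv4 subnets of all existing docker networks.
func takenSubnets() (map[string]bool, error) {
	out, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(out)) {
		sub, err := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue // network may have vanished between ls and inspect
		}
		for _, s := range strings.Fields(string(sub)) {
			taken[s] = true
		}
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		log.Fatal(err)
	}
	for octet := 49; octet <= 247; octet += 9 { // 49, 58, 67, ... as in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping taken subnet", subnet)
			continue
		}
		fmt.Println("using free private subnet", subnet)
		return
	}
	log.Fatal("no free subnet found")
}
```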
	I1123 08:45:02.095350  329090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:45:02.111533  329090 cli_runner.go:164] Run: docker volume create embed-certs-756339 --label name.minikube.sigs.k8s.io=embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:45:02.128824  329090 oci.go:103] Successfully created a docker volume embed-certs-756339
	I1123 08:45:02.128896  329090 cli_runner.go:164] Run: docker run --rm --name embed-certs-756339-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --entrypoint /usr/bin/test -v embed-certs-756339:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:45:02.559029  329090 oci.go:107] Successfully prepared a docker volume embed-certs-756339
	I1123 08:45:02.559098  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:02.559108  329090 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:45:02.559163  329090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 08:45:06.464312  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:08.466215  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:06.707246  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:09.040137  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:11.046122  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:07.131448  329090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-756339:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.572224972s)
	I1123 08:45:07.131484  329090 kic.go:203] duration metric: took 4.572370498s to extract preloaded images to volume ...
	W1123 08:45:07.131573  329090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:45:07.131616  329090 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:45:07.131860  329090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:45:07.219659  329090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-756339 --name embed-certs-756339 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-756339 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-756339 --network embed-certs-756339 --ip 192.168.103.2 --volume embed-certs-756339:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:45:07.635482  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Running}}
	I1123 08:45:07.658965  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.681327  329090 cli_runner.go:164] Run: docker exec embed-certs-756339 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:45:07.737769  329090 oci.go:144] the created container "embed-certs-756339" has a running status.
	I1123 08:45:07.737802  329090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa...
	I1123 08:45:07.895228  329090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:45:07.935222  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:07.958382  329090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:45:07.958405  329090 kic_runner.go:114] Args: [docker exec --privileged embed-certs-756339 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:45:08.015520  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:08.039803  329090 machine.go:94] provisionDockerMachine start ...
	I1123 08:45:08.039898  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:08.064345  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:08.064680  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:08.064723  329090 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:45:08.065347  329090 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47890->127.0.0.1:33131: read: connection reset by peer
	I1123 08:45:11.244730  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.244755  329090 ubuntu.go:182] provisioning hostname "embed-certs-756339"
	I1123 08:45:11.244812  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.273763  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.274055  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.274072  329090 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-756339 && echo "embed-certs-756339" | sudo tee /etc/hostname
	I1123 08:45:11.457570  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:45:11.457714  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.488146  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:11.488457  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:11.488485  329090 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-756339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-756339/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-756339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:45:11.660198  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
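
Provisioning above is plain SSH against the container's published port (127.0.0.1:33131 in this run): the first dial is retried after the handshake is reset because sshd inside the kic container is still starting, then `hostname`, the `sudo hostname ...` command, and the idempotent /etc/hosts edit are run over the channel. A condensed sketch with golang.org/x/crypto/ssh; the port and key path are this run's values and should be treated as placeholders:

```go
// sshprobe.go - dial the container's published SSH port, retrying until
// sshd accepts the handshake, then run `hostname` as the log above does.
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/.minikube/machines/embed-certs-756339/id_rsa") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // host key is freshly generated per container
		Timeout:         5 * time.Second,
	}
	var client *ssh.Client
	for { // early dials can fail with "connection reset by peer" while sshd starts
		client, err = ssh.Dial("tcp", "127.0.0.1:33131", cfg)
		if err == nil {
			break
		}
		log.Printf("dial: %v (retrying)", err)
		time.Sleep(time.Second)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hostname: %s", out)
}
```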
	I1123 08:45:11.660362  329090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:45:11.660453  329090 ubuntu.go:190] setting up certificates
	I1123 08:45:11.660471  329090 provision.go:84] configureAuth start
	I1123 08:45:11.661011  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:11.684982  329090 provision.go:143] copyHostCerts
	I1123 08:45:11.685043  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:45:11.685053  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:45:11.685140  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:45:11.685249  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:45:11.685255  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:45:11.685292  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:45:11.685383  329090 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:45:11.685391  329090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:45:11.685427  329090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:45:11.685506  329090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.embed-certs-756339 san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]
	I1123 08:45:11.758697  329090 provision.go:177] copyRemoteCerts
	I1123 08:45:11.758777  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:45:11.758833  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:11.787179  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:11.905965  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:45:11.934744  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:45:11.961707  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
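	A quick way to confirm the server cert copied above actually carries the SANs requested during generation (a sketch using paths from this log; not a step the test itself runs):
	
	  # list the Subject Alternative Names baked into the provisioned server cert
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'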
	I1123 08:45:11.985963  329090 provision.go:87] duration metric: took 325.479379ms to configureAuth
	I1123 08:45:11.985992  329090 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:45:11.986220  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:11.986358  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.011499  329090 main.go:143] libmachine: Using SSH client type: native
	I1123 08:45:12.011833  329090 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33131 <nil> <nil>}
	I1123 08:45:12.011872  329090 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:45:12.373361  329090 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:45:12.373388  329090 machine.go:97] duration metric: took 4.333562614s to provisionDockerMachine
	I1123 08:45:12.373402  329090 client.go:176] duration metric: took 10.387301049s to LocalClient.Create
	I1123 08:45:12.373431  329090 start.go:167] duration metric: took 10.387376613s to libmachine.API.Create "embed-certs-756339"
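	The provisioning step above wrote /etc/sysconfig/crio.minikube and restarted cri-o; if the insecure-registry flag needs double-checking, something like this works (a sketch, assuming the profile name from this run):
	
	  # print the options minikube injected for cri-o on the node
	  minikube ssh -p embed-certs-756339 -- cat /etc/sysconfig/crio.minikube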
	I1123 08:45:12.373444  329090 start.go:293] postStartSetup for "embed-certs-756339" (driver="docker")
	I1123 08:45:12.373458  329090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:45:12.373521  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:45:12.373575  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.394472  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.505303  329090 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:45:12.509881  329090 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:45:12.509946  329090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:45:12.509962  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:45:12.510025  329090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:45:12.510127  329090 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:45:12.510256  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:45:12.520339  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:12.547586  329090 start.go:296] duration metric: took 174.127267ms for postStartSetup
	I1123 08:45:12.548040  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.572325  329090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:45:12.572597  329090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:45:12.572652  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.595241  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.708576  329090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:45:12.713786  329090 start.go:128] duration metric: took 10.729979645s to createHost
	I1123 08:45:12.713812  329090 start.go:83] releasing machines lock for "embed-certs-756339", held for 10.730153164s
	I1123 08:45:12.713888  329090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:45:12.744434  329090 ssh_runner.go:195] Run: cat /version.json
	I1123 08:45:12.744496  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.744678  329090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:45:12.744776  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:12.771659  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.771722  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:12.970377  329090 ssh_runner.go:195] Run: systemctl --version
	I1123 08:45:12.980003  329090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:45:13.031076  329090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:45:13.037986  329090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:45:13.038091  329090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:45:13.078655  329090 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:45:13.078678  329090 start.go:496] detecting cgroup driver to use...
	I1123 08:45:13.078778  329090 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:45:13.078826  329090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:45:13.102501  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:45:13.121011  329090 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:45:13.121088  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:45:13.144025  329090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:45:13.166610  329090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:45:13.266885  329090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:45:13.383738  329090 docker.go:234] disabling docker service ...
	I1123 08:45:13.383808  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:45:13.408902  329090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:45:13.425055  329090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:45:13.533375  329090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:45:13.641970  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:45:13.655349  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:45:13.672802  329090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:45:13.672859  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.682619  329090 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:45:13.682671  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.691340  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.700633  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.709880  329090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:45:13.717844  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.726872  329090 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:45:13.741035  329090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
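	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf roughly in the following state (a reconstruction from the commands, written to a scratch path so nothing real is modified):
	
	  # hypothetical snapshot of the drop-in after minikube's edits
	  tee /tmp/02-crio.conf.example >/dev/null <<'EOF'
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	  EOF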
	I1123 08:45:13.750011  329090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:45:13.757738  329090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:45:13.764834  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:13.846176  329090 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 08:45:15.041719  329090 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.195506975s)
	I1123 08:45:15.041743  329090 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:45:15.041806  329090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:45:15.046071  329090 start.go:564] Will wait 60s for crictl version
	I1123 08:45:15.046136  329090 ssh_runner.go:195] Run: which crictl
	I1123 08:45:15.049573  329090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:45:15.078843  329090 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:45:15.078920  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.108962  329090 ssh_runner.go:195] Run: crio --version
	I1123 08:45:15.139712  329090 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1123 08:45:10.968346  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.466785  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:13.540283  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:16.038123  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:15.141197  329090 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:45:15.159501  329090 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 08:45:15.163431  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:15.173476  329090 kubeadm.go:884] updating cluster {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:45:15.173575  329090 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:45:15.173616  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.210172  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.210193  329090 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:45:15.210244  329090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:45:15.237085  329090 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:45:15.237104  329090 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:45:15.237113  329090 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 08:45:15.237217  329090 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-756339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
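	The kubelet unit and the 10-kubeadm.conf drop-in generated above can be inspected on the node with standard systemd tooling (a sketch; systemctl cat prints the unit merged with its drop-ins):
	
	  # show kubelet.service plus the drop-in carrying the ExecStart override
	  systemctl cat kubelet
	  systemctl status kubelet --no-pager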
	I1123 08:45:15.237295  329090 ssh_runner.go:195] Run: crio config
	I1123 08:45:15.283601  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:15.283625  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:15.283643  329090 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:45:15.283669  329090 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-756339 NodeName:embed-certs-756339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:45:15.283837  329090 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-756339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
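	Once this config lands at /var/tmp/minikube/kubeadm.yaml (the copy happens below), it can be sanity-checked without mutating node state (a sketch; --dry-run is a standard kubeadm flag):
	
	  # exercise the generated config without creating any cluster state
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run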
	
	I1123 08:45:15.283904  329090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:45:15.292504  329090 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:45:15.292566  329090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:45:15.300378  329090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 08:45:15.312974  329090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:45:15.327882  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1123 08:45:15.340181  329090 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:45:15.343646  329090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:45:15.354110  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:15.443097  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:15.467751  329090 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339 for IP: 192.168.103.2
	I1123 08:45:15.467775  329090 certs.go:195] generating shared ca certs ...
	I1123 08:45:15.467794  329090 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.467944  329090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:45:15.468013  329090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:45:15.468026  329090 certs.go:257] generating profile certs ...
	I1123 08:45:15.468092  329090 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key
	I1123 08:45:15.468108  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt with IP's: []
	I1123 08:45:15.681556  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt ...
	I1123 08:45:15.681578  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.crt: {Name:mk22797cd88ef1f778f787e25af3588a79d11855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681755  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key ...
	I1123 08:45:15.681771  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key: {Name:mk2507e79a5f05fa7cb11db2054cd014292902df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.681880  329090 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354
	I1123 08:45:15.681896  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1123 08:45:15.727484  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 ...
	I1123 08:45:15.727506  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354: {Name:mkade0e3ba918afced6504828d64527edcb7e06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727677  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 ...
	I1123 08:45:15.727718  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354: {Name:mke39adf49845e1231f060e2780420238d4a87bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.727834  329090 certs.go:382] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt
	I1123 08:45:15.727927  329090 certs.go:386] copying /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354 -> /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key
	I1123 08:45:15.728008  329090 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key
	I1123 08:45:15.728025  329090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt with IP's: []
	I1123 08:45:15.834669  329090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt ...
	I1123 08:45:15.834720  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt: {Name:mkad5e6304235e6d8f0ebd086b0ccf458022d6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.834861  329090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key ...
	I1123 08:45:15.834879  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key: {Name:mka603d9600779233619dbc354e88b03aa5d1f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:15.835045  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:45:15.835081  329090 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:45:15.835092  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:45:15.835118  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:45:15.835142  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:45:15.835178  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:45:15.835218  329090 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:45:15.835729  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:45:15.855139  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:45:15.873868  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:45:15.894547  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:45:15.912933  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:45:15.930981  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:45:15.949401  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:45:15.970429  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:45:15.989205  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:45:16.008793  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:45:16.025737  329090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:45:16.043175  329090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:45:16.055931  329090 ssh_runner.go:195] Run: openssl version
	I1123 08:45:16.061639  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:45:16.069652  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073176  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.073220  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:45:16.108921  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:45:16.116885  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:45:16.124882  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128591  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.128656  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:45:16.185316  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:45:16.195245  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:45:16.206667  329090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211327  329090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.211374  329090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:45:16.251180  329090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
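	The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above are OpenSSL subject-hash filenames, which is how lookups in /etc/ssl/certs locate a CA; one of the links could be reproduced by hand like this (a sketch):
	
	  # the link name is the cert's subject hash with a ".0" suffix
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"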
	I1123 08:45:16.260175  329090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:45:16.264022  329090 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:45:16.264083  329090 kubeadm.go:401] StartCluster: {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:45:16.264171  329090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:45:16.264218  329090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:45:16.292235  329090 cri.go:89] found id: ""
	I1123 08:45:16.292292  329090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:45:16.300794  329090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:45:16.308741  329090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:45:16.308794  329090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:45:16.316404  329090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:45:16.316422  329090 kubeadm.go:158] found existing configuration files:
	
	I1123 08:45:16.316458  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:45:16.324309  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:45:16.324349  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:45:16.332260  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:45:16.340786  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:45:16.340842  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:45:16.348658  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.358536  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:45:16.358583  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:45:16.368595  329090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:45:16.377891  329090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:45:16.377952  329090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:45:16.386029  329090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:45:16.424131  329090 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:45:16.424226  329090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:45:16.444456  329090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:45:16.444527  329090 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:45:16.444572  329090 kubeadm.go:319] OS: Linux
	I1123 08:45:16.444654  329090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:45:16.444763  329090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:45:16.444824  329090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:45:16.444916  329090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:45:16.444986  329090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:45:16.445059  329090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:45:16.445128  329090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:45:16.445197  329090 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:45:16.502432  329090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:45:16.502566  329090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:45:16.502717  329090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:45:16.512573  329090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:45:16.514857  329090 out.go:252]   - Generating certificates and keys ...
	I1123 08:45:16.514990  329090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:45:16.515094  329090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:45:16.608081  329090 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:45:16.680528  329090 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:45:16.801156  329090 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:45:17.144723  329090 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:45:17.391838  329090 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:45:17.392042  329090 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.447222  329090 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:45:17.447383  329090 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-756339 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 08:45:17.644625  329090 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:45:17.916674  329090 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:45:18.538498  329090 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:45:18.538728  329090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:45:18.967277  329090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:45:19.377546  329090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:45:19.559622  329090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:45:20.075738  329090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:45:20.364836  329090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:45:20.365389  329090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:45:20.380029  329090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1123 08:45:15.964678  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.463898  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:18.038557  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:20.040142  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:20.381602  329090 out.go:252]   - Booting up control plane ...
	I1123 08:45:20.381763  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:45:20.381900  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:45:20.382610  329090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:45:20.395865  329090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:45:20.396015  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:45:20.402081  329090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:45:20.402378  329090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:45:20.402436  329090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:45:20.508331  329090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:45:20.508495  329090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:45:22.009994  329090 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501781773s
	I1123 08:45:22.014389  329090 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:45:22.014519  329090 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 08:45:22.014637  329090 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:45:22.014773  329090 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:45:23.091748  329090 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.077310791s
	I1123 08:45:23.589008  329090 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.574535055s
	I1123 08:45:25.015461  329090 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001048624s
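	The three endpoints polled above are plain HTTPS and can be hit directly on the node (a sketch; -k because the components serve self-signed certs, and the health paths are in the components' default always-allowed list):
	
	  curl -k https://192.168.103.2:8443/livez   # kube-apiserver
	  curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez      # kube-scheduler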
	I1123 08:45:25.026445  329090 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:45:25.036344  329090 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:45:25.045136  329090 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:45:25.045341  329090 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-756339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:45:25.052213  329090 kubeadm.go:319] [bootstrap-token] Using token: jh7osp.28agjpkabxiw65fh
	W1123 08:45:20.963406  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.964352  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:22.538516  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:24.539132  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:25.055029  329090 out.go:252]   - Configuring RBAC rules ...
	I1123 08:45:25.055175  329090 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:45:25.058117  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:45:25.062975  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:45:25.066360  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:45:25.069196  329090 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:45:25.071492  329090 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:45:25.419913  329090 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:45:25.836463  329090 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:45:26.420358  329090 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:45:26.421135  329090 kubeadm.go:319] 
	I1123 08:45:26.421252  329090 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:45:26.421277  329090 kubeadm.go:319] 
	I1123 08:45:26.421378  329090 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:45:26.421390  329090 kubeadm.go:319] 
	I1123 08:45:26.421426  329090 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:45:26.421521  329090 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:45:26.421603  329090 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:45:26.421620  329090 kubeadm.go:319] 
	I1123 08:45:26.421735  329090 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:45:26.421746  329090 kubeadm.go:319] 
	I1123 08:45:26.421806  329090 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:45:26.421815  329090 kubeadm.go:319] 
	I1123 08:45:26.421881  329090 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:45:26.421994  329090 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:45:26.422098  329090 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:45:26.422107  329090 kubeadm.go:319] 
	I1123 08:45:26.422206  329090 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:45:26.422316  329090 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:45:26.422325  329090 kubeadm.go:319] 
	I1123 08:45:26.422429  329090 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422527  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c \
	I1123 08:45:26.422562  329090 kubeadm.go:319] 	--control-plane 
	I1123 08:45:26.422571  329090 kubeadm.go:319] 
	I1123 08:45:26.422711  329090 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:45:26.422722  329090 kubeadm.go:319] 
	I1123 08:45:26.422841  329090 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jh7osp.28agjpkabxiw65fh \
	I1123 08:45:26.422947  329090 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:00e9f1f40016f42fec2339db9e85acf1c18572cc840310c2e8a1e45443dc458c 
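	The --discovery-token-ca-cert-hash printed above can be recomputed from the cluster CA; with minikube the CA lives under /var/lib/minikube/certs (a sketch of the standard kubeadm recipe):
	
	  # sha256 over the DER-encoded CA public key, the format kubeadm expects
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'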
	I1123 08:45:26.425509  329090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:45:26.425638  329090 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:45:26.425665  329090 cni.go:84] Creating CNI manager for ""
	I1123 08:45:26.425679  329090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:45:26.427041  329090 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:45:26.427891  329090 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:45:26.432307  329090 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:45:26.432326  329090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:45:26.445364  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:45:26.642490  329090 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:45:26.642551  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:26.642592  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-756339 minikube.k8s.io/updated_at=2025_11_23T08_45_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e minikube.k8s.io/name=embed-certs-756339 minikube.k8s.io/primary=true
	I1123 08:45:26.729263  329090 ops.go:34] apiserver oom_adj: -16
	I1123 08:45:26.729393  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1123 08:45:25.464467  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:27.964097  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:26.539240  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:29.038507  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:27.229843  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:27.730298  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.230009  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:28.730490  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.229984  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:29.730299  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.229522  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:30.729582  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.230290  329090 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:45:31.293892  329090 kubeadm.go:1114] duration metric: took 4.651396638s to wait for elevateKubeSystemPrivileges
	I1123 08:45:31.293931  329090 kubeadm.go:403] duration metric: took 15.029851328s to StartCluster
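The burst of `kubectl get sa default` calls above is a fixed-interval poll: the RBAC bootstrap is only usable once the default ServiceAccount exists. Minikube shells out to kubectl as logged; a hedged client-go equivalent of the same wait, with the ~500ms cadence taken from the timestamps:

package wait

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA polls every 500ms until the "default" ServiceAccount
// exists in the given namespace, or until the context expires.
func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if _, err := cs.CoreV1().ServiceAccounts(ns).Get(ctx, "default", metav1.GetOptions{}); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}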
	I1123 08:45:31.293953  329090 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.294038  329090 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:45:31.295585  329090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:45:31.295872  329090 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:45:31.295936  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:45:31.296007  329090 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:45:31.296114  329090 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-756339"
	I1123 08:45:31.296118  329090 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:45:31.296134  329090 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-756339"
	I1123 08:45:31.296128  329090 addons.go:70] Setting default-storageclass=true in profile "embed-certs-756339"
	I1123 08:45:31.296166  329090 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-756339"
	I1123 08:45:31.296176  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.296604  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.296720  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.297232  329090 out.go:179] * Verifying Kubernetes components...
	I1123 08:45:31.299135  329090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:45:31.322679  329090 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:45:31.324511  329090 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.324536  329090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:45:31.324593  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.329451  329090 addons.go:239] Setting addon default-storageclass=true in "embed-certs-756339"
	I1123 08:45:31.329500  329090 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:45:31.330018  329090 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:45:31.359473  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.359508  329090 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.359523  329090 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:45:31.359576  329090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:45:31.383150  329090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33131 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:45:31.400104  329090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:45:31.438850  329090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:45:31.477184  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:45:31.500079  329090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:45:31.590832  329090 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
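The sed pipeline at 08:45:31.400104 rewrites the CoreDNS ConfigMap in place. Assuming the stock kubeadm Corefile, the patched fragment around the forward plugin would read roughly:

    hosts {
       192.168.103.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

so pods can resolve host.minikube.internal to the host bridge address, with every other name falling through to the node's resolver. The same sed run also inserts a `log` directive ahead of `errors`.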
	I1123 08:45:31.592356  329090 node_ready.go:35] waiting up to 6m0s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:31.806094  329090 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1123 08:45:30.466331  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:32.963158  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:34.963993  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	W1123 08:45:31.541665  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:34.038345  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:31.807238  329090 addons.go:530] duration metric: took 511.238501ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 08:45:32.094332  329090 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-756339" context rescaled to 1 replicas
	W1123 08:45:33.595476  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:36.094914  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:37.463401  323135 pod_ready.go:104] pod "coredns-66bc5c9577-8f8f5" is not "Ready", error: <nil>
	I1123 08:45:39.463744  323135 pod_ready.go:94] pod "coredns-66bc5c9577-8f8f5" is "Ready"
	I1123 08:45:39.463771  323135 pod_ready.go:86] duration metric: took 37.505301624s for pod "coredns-66bc5c9577-8f8f5" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.466073  323135 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.469881  323135 pod_ready.go:94] pod "etcd-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.469907  323135 pod_ready.go:86] duration metric: took 3.813451ms for pod "etcd-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.471783  323135 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.475591  323135 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.475615  323135 pod_ready.go:86] duration metric: took 3.808626ms for pod "kube-apiserver-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.477543  323135 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.662072  323135 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:39.662095  323135 pod_ready.go:86] duration metric: took 184.532328ms for pod "kube-controller-manager-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:39.861972  323135 pod_ready.go:83] waiting for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.262090  323135 pod_ready.go:94] pod "kube-proxy-sn4sp" is "Ready"
	I1123 08:45:40.262116  323135 pod_ready.go:86] duration metric: took 400.120277ms for pod "kube-proxy-sn4sp" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.462054  323135 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862186  323135 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-726261" is "Ready"
	I1123 08:45:40.862212  323135 pod_ready.go:86] duration metric: took 400.136767ms for pod "kube-scheduler-default-k8s-diff-port-726261" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.862222  323135 pod_ready.go:40] duration metric: took 38.907156113s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:40.906296  323135 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:40.908135  323135 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-726261" cluster and "default" namespace by default
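The pod_ready lines interleaved above (PIDs 323135 and 323816 are two StartStop profiles running in parallel) boil down to reading the PodReady condition off each labelled kube-system pod and retrying until it flips to True. A minimal sketch of that predicate using client-go types, with the retry/wait scaffolding around it omitted:

package ready

import corev1 "k8s.io/api/core/v1"

// podReady reports whether a pod's PodReady condition is True, which is
// what the `is not "Ready" ... will retry` lines above are testing.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}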
	W1123 08:45:36.537535  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	W1123 08:45:38.537920  323816 pod_ready.go:104] pod "coredns-66bc5c9577-khlrk" is not "Ready", error: <nil>
	I1123 08:45:40.537903  323816 pod_ready.go:94] pod "coredns-66bc5c9577-khlrk" is "Ready"
	I1123 08:45:40.537927  323816 pod_ready.go:86] duration metric: took 38.004948026s for pod "coredns-66bc5c9577-khlrk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.540197  323816 pod_ready.go:83] waiting for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.543594  323816 pod_ready.go:94] pod "etcd-no-preload-187607" is "Ready"
	I1123 08:45:40.543613  323816 pod_ready.go:86] duration metric: took 3.39504ms for pod "etcd-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.545430  323816 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.548523  323816 pod_ready.go:94] pod "kube-apiserver-no-preload-187607" is "Ready"
	I1123 08:45:40.548540  323816 pod_ready.go:86] duration metric: took 3.086438ms for pod "kube-apiserver-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.550144  323816 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.736784  323816 pod_ready.go:94] pod "kube-controller-manager-no-preload-187607" is "Ready"
	I1123 08:45:40.736810  323816 pod_ready.go:86] duration metric: took 186.650289ms for pod "kube-controller-manager-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:40.936965  323816 pod_ready.go:83] waiting for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:45:38.095893  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	W1123 08:45:40.595721  329090 node_ready.go:57] node "embed-certs-756339" has "Ready":"False" status (will retry)
	I1123 08:45:41.336483  323816 pod_ready.go:94] pod "kube-proxy-f9d8j" is "Ready"
	I1123 08:45:41.336508  323816 pod_ready.go:86] duration metric: took 399.518187ms for pod "kube-proxy-f9d8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.536451  323816 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936068  323816 pod_ready.go:94] pod "kube-scheduler-no-preload-187607" is "Ready"
	I1123 08:45:41.936095  323816 pod_ready.go:86] duration metric: took 399.617585ms for pod "kube-scheduler-no-preload-187607" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:41.936110  323816 pod_ready.go:40] duration metric: took 39.406642608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:41.977753  323816 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:41.979147  323816 out.go:179] * Done! kubectl is now configured to use "no-preload-187607" cluster and "default" namespace by default
	I1123 08:45:43.095643  329090 node_ready.go:49] node "embed-certs-756339" is "Ready"
	I1123 08:45:43.095676  329090 node_ready.go:38] duration metric: took 11.503297149s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:45:43.095722  329090 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:45:43.095787  329090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:45:43.107848  329090 api_server.go:72] duration metric: took 11.811934824s to wait for apiserver process to appear ...
	I1123 08:45:43.107869  329090 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:45:43.107884  329090 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:45:43.112629  329090 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:45:43.113413  329090 api_server.go:141] control plane version: v1.34.1
	I1123 08:45:43.113433  329090 api_server.go:131] duration metric: took 5.559653ms to wait for apiserver health ...
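The healthz probe at 08:45:43 is a plain HTTPS GET that must return 200 with body "ok". A self-contained sketch of the same check (certificate verification is skipped here purely for brevity; minikube actually trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.103.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}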
	I1123 08:45:43.113441  329090 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:45:43.116485  329090 system_pods.go:59] 8 kube-system pods found
	I1123 08:45:43.116510  329090 system_pods.go:61] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.116515  329090 system_pods.go:61] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.116520  329090 system_pods.go:61] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.116525  329090 system_pods.go:61] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.116532  329090 system_pods.go:61] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.116536  329090 system_pods.go:61] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.116539  329090 system_pods.go:61] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.116545  329090 system_pods.go:61] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.116550  329090 system_pods.go:74] duration metric: took 3.105251ms to wait for pod list to return data ...
	I1123 08:45:43.116558  329090 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:45:43.118523  329090 default_sa.go:45] found service account: "default"
	I1123 08:45:43.118538  329090 default_sa.go:55] duration metric: took 1.974886ms for default service account to be created ...
	I1123 08:45:43.118545  329090 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:45:43.120780  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.120802  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.120810  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.120815  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.120819  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.120826  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.120831  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.120834  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.120839  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.120863  329090 retry.go:31] will retry after 215.602357ms: missing components: kube-dns
	I1123 08:45:43.340425  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.340455  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.340462  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.340467  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.340472  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.340477  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.340480  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.340483  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.340488  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.340504  329090 retry.go:31] will retry after 325.287893ms: missing components: kube-dns
	I1123 08:45:43.668913  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:43.668952  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:45:43.668962  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:43.668971  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:43.668977  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:43.668983  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:43.668987  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:43.668993  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:43.669002  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:45:43.669025  329090 retry.go:31] will retry after 462.937798ms: missing components: kube-dns
	I1123 08:45:44.135919  329090 system_pods.go:86] 8 kube-system pods found
	I1123 08:45:44.135950  329090 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Running
	I1123 08:45:44.135957  329090 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running
	I1123 08:45:44.135962  329090 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running
	I1123 08:45:44.135967  329090 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running
	I1123 08:45:44.135972  329090 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running
	I1123 08:45:44.135977  329090 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running
	I1123 08:45:44.135983  329090 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running
	I1123 08:45:44.135988  329090 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Running
	I1123 08:45:44.135997  329090 system_pods.go:126] duration metric: took 1.017446384s to wait for k8s-apps to be running ...
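The 215ms/325ms/462ms retry intervals logged above come from a jittered, growing backoff rather than a fixed ticker. A sketch of that shape, where the ~1.5x multiplier and the jitter range are assumptions inferred from the three logged delays, not minikube's exact constants:

package retry

import (
	"context"
	"math/rand"
	"time"
)

// until retries check with a growing, jittered delay until it succeeds
// or the context expires.
func until(ctx context.Context, check func() error) error {
	delay := 200 * time.Millisecond
	for {
		if err := check(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(wait):
		}
		delay = delay * 3 / 2 // grow roughly 1.5x per attempt
	}
}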
	I1123 08:45:44.136008  329090 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:45:44.136053  329090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:45:44.148387  329090 system_svc.go:56] duration metric: took 12.375192ms WaitForService to wait for kubelet
	I1123 08:45:44.148408  329090 kubeadm.go:587] duration metric: took 12.85249816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:45:44.148426  329090 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:45:44.150884  329090 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:45:44.150906  329090 node_conditions.go:123] node cpu capacity is 8
	I1123 08:45:44.150923  329090 node_conditions.go:105] duration metric: took 2.493335ms to run NodePressure ...
	I1123 08:45:44.150933  329090 start.go:242] waiting for startup goroutines ...
	I1123 08:45:44.150943  329090 start.go:247] waiting for cluster config update ...
	I1123 08:45:44.150953  329090 start.go:256] writing updated cluster config ...
	I1123 08:45:44.151188  329090 ssh_runner.go:195] Run: rm -f paused
	I1123 08:45:44.154964  329090 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:44.158442  329090 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.162122  329090 pod_ready.go:94] pod "coredns-66bc5c9577-ffmn2" is "Ready"
	I1123 08:45:44.162139  329090 pod_ready.go:86] duration metric: took 3.680173ms for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.163781  329090 pod_ready.go:83] waiting for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.167030  329090 pod_ready.go:94] pod "etcd-embed-certs-756339" is "Ready"
	I1123 08:45:44.167046  329090 pod_ready.go:86] duration metric: took 3.249458ms for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.168620  329090 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.171889  329090 pod_ready.go:94] pod "kube-apiserver-embed-certs-756339" is "Ready"
	I1123 08:45:44.171905  329090 pod_ready.go:86] duration metric: took 3.265991ms for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.173681  329090 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.558804  329090 pod_ready.go:94] pod "kube-controller-manager-embed-certs-756339" is "Ready"
	I1123 08:45:44.558838  329090 pod_ready.go:86] duration metric: took 385.124392ms for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:44.759793  329090 pod_ready.go:83] waiting for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.158864  329090 pod_ready.go:94] pod "kube-proxy-npnsh" is "Ready"
	I1123 08:45:45.158887  329090 pod_ready.go:86] duration metric: took 399.071703ms for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.360200  329090 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758770  329090 pod_ready.go:94] pod "kube-scheduler-embed-certs-756339" is "Ready"
	I1123 08:45:45.758800  329090 pod_ready.go:86] duration metric: took 398.571969ms for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:45:45.758811  329090 pod_ready.go:40] duration metric: took 1.603821403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:45:45.800049  329090 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:45:45.802064  329090 out.go:179] * Done! kubectl is now configured to use "embed-certs-756339" cluster and "default" namespace by default
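The final start.go:625 line in each profile compares the host kubectl minor version against the cluster's and only warns when they drift. A small sketch of that comparison, with the version strings taken from the log and parsing simplified to well-formed "major.minor.patch" input:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute minor-version distance between two
// "major.minor.patch" strings, e.g. 1.34.2 vs 1.34.1 -> 0.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		m, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return m
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.34.2", "1.34.1")) // 0: no skew warning needed
}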
	
	
	==> CRI-O <==
	Nov 23 08:45:42 embed-certs-756339 crio[788]: time="2025-11-23T08:45:42.963096897Z" level=info msg="Starting container: 668cbdf7c3379c6184fe39c9f28f0343f908488799e7758b30acf96a53f40dc4" id=80509cb9-2587-4052-9725-c3cbada8f7ce name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:42 embed-certs-756339 crio[788]: time="2025-11-23T08:45:42.964895069Z" level=info msg="Started container" PID=1866 containerID=668cbdf7c3379c6184fe39c9f28f0343f908488799e7758b30acf96a53f40dc4 description=kube-system/coredns-66bc5c9577-ffmn2/coredns id=80509cb9-2587-4052-9725-c3cbada8f7ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=7cc25a077cf43230c65e013ed0037c9a406b24b068106aa433440a686f5dd201
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.238469142Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b4db450d-c928-4b7f-adcb-95d0957ff291 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.238538773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.243653769Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e6dbda5c26f87ba66d0d695cf1befbd11e0083e5e4dd8ee5e01e06989107b0fd UID:9d266def-c91d-4fd0-b04a-42a6fd90082f NetNS:/var/run/netns/b2cbe19c-651a-4217-a3f0-8a452390c893 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005bc560}] Aliases:map[]}"
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.243679007Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.252814126Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e6dbda5c26f87ba66d0d695cf1befbd11e0083e5e4dd8ee5e01e06989107b0fd UID:9d266def-c91d-4fd0-b04a-42a6fd90082f NetNS:/var/run/netns/b2cbe19c-651a-4217-a3f0-8a452390c893 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005bc560}] Aliases:map[]}"
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.252946176Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.253585551Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.254365536Z" level=info msg="Ran pod sandbox e6dbda5c26f87ba66d0d695cf1befbd11e0083e5e4dd8ee5e01e06989107b0fd with infra container: default/busybox/POD" id=b4db450d-c928-4b7f-adcb-95d0957ff291 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.255547283Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a5a9600b-a791-40c7-aa20-8ae5dd28c5ee name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.255671966Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a5a9600b-a791-40c7-aa20-8ae5dd28c5ee name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.255743925Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a5a9600b-a791-40c7-aa20-8ae5dd28c5ee name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.256475893Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=92c1b41d-1842-403c-9d3f-3405935747f6 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.26022405Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.917367587Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=92c1b41d-1842-403c-9d3f-3405935747f6 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.918053789Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1ca7c113-b96f-4c7b-99ba-9c0632242595 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.919286291Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e68623a3-6414-489b-a149-b9ada33ecfbf name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.922179339Z" level=info msg="Creating container: default/busybox/busybox" id=5b956112-bac3-40cd-9b2c-edb655ca84f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.922296991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.925833415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.926372303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.957279184Z" level=info msg="Created container 8bde30f771164855073bdd4c886c4a9b6a21001f0c01afa8891b35f1334fda6e: default/busybox/busybox" id=5b956112-bac3-40cd-9b2c-edb655ca84f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.957813477Z" level=info msg="Starting container: 8bde30f771164855073bdd4c886c4a9b6a21001f0c01afa8891b35f1334fda6e" id=acb536c7-2bea-447a-b618-ad8cd5e7b596 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:45:46 embed-certs-756339 crio[788]: time="2025-11-23T08:45:46.959308705Z" level=info msg="Started container" PID=1946 containerID=8bde30f771164855073bdd4c886c4a9b6a21001f0c01afa8891b35f1334fda6e description=default/busybox/busybox id=acb536c7-2bea-447a-b618-ad8cd5e7b596 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6dbda5c26f87ba66d0d695cf1befbd11e0083e5e4dd8ee5e01e06989107b0fd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	8bde30f771164       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   e6dbda5c26f87       busybox                                      default
	668cbdf7c3379       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   7cc25a077cf43       coredns-66bc5c9577-ffmn2                     kube-system
	0946855db2cec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   05a69c994a47e       storage-provisioner                          kube-system
	384a3b3251b2c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   9a0a961b8d2c3       kindnet-4hsx6                                kube-system
	6f15047b0bc79       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   c7896837b2409       kube-proxy-npnsh                             kube-system
	0d76edc564c8d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   3e2e4845cc54a       kube-apiserver-embed-certs-756339            kube-system
	44827fdfef44f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   1b1b8021227df       kube-controller-manager-embed-certs-756339   kube-system
	b4ef8d41022ff       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   dc228ff18aff4       kube-scheduler-embed-certs-756339            kube-system
	5e05fab1a7f7b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   78c46f5d27045       etcd-embed-certs-756339                      kube-system
	
	
	==> coredns [668cbdf7c3379c6184fe39c9f28f0343f908488799e7758b30acf96a53f40dc4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33492 - 19436 "HINFO IN 1937583500356679025.985172472558968899. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.05809951s
	
	
	==> describe nodes <==
	Name:               embed-certs-756339
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-756339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-756339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_45_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:45:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-756339
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:45:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:45:42 +0000   Sun, 23 Nov 2025 08:45:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:45:42 +0000   Sun, 23 Nov 2025 08:45:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:45:42 +0000   Sun, 23 Nov 2025 08:45:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:45:42 +0000   Sun, 23 Nov 2025 08:45:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-756339
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d012ad2e-0684-44d6-8937-6f0e3eaafce4
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-ffmn2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-756339                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-4hsx6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-756339             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-756339    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-npnsh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-756339             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node embed-certs-756339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node embed-certs-756339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node embed-certs-756339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node embed-certs-756339 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node embed-certs-756339 event: Registered Node embed-certs-756339 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-756339 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [5e05fab1a7f7b86a5e9ca3ccbfbe2b47c172b9ebf4adfcb9a760bbac5d099eb9] <==
	{"level":"warn","ts":"2025-11-23T08:45:22.946906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:22.954786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:22.960926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:22.967505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:22.973455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:22.979600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:22.985537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:22.991875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:22.997741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.015798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.022053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.028021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.035226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.042251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.048425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.055083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.061247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.068134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.074119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.080765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.101885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.105093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.112456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.120234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:45:23.161483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33756","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:45:55 up  1:28,  0 user,  load average: 3.69, 3.74, 2.45
	Linux embed-certs-756339 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [384a3b3251b2c7947d42662bc3c28cd1947b21c5ea11f9516add2000e09daeef] <==
	I1123 08:45:31.900207       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:45:31.900482       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:45:31.900622       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:45:31.900639       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:45:31.900649       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:45:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:45:32.099604       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:45:32.099646       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:45:32.099658       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:45:32.099840       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:45:32.499813       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:45:32.499840       1 metrics.go:72] Registering metrics
	I1123 08:45:32.499908       1 controller.go:711] "Syncing nftables rules"
	I1123 08:45:42.100220       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:45:42.100285       1 main.go:301] handling current node
	I1123 08:45:52.100173       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 08:45:52.100229       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0d76edc564c8d5bbee1665e9f5d95023e1c761b636c3470ddbde47343510bb3a] <==
	E1123 08:45:23.709121       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 08:45:23.719367       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:45:23.723006       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:23.723092       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 08:45:23.728089       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:23.728424       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:45:23.912288       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:45:24.523459       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:45:24.527292       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:45:24.527310       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:45:24.936648       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:45:24.970911       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:45:25.026181       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:45:25.032383       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 08:45:25.033305       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:45:25.038744       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:45:25.545137       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:45:25.826139       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:45:25.835575       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:45:25.841532       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:45:31.295756       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 08:45:31.450246       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:31.453644       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 08:45:31.648167       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 08:45:54.025488       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:50054: use of closed network connection
	
	
	==> kube-controller-manager [44827fdfef44fdd3807dd312c5e1bbda7d30bb46711cc00b2e14a954a3fde8de] <==
	I1123 08:45:30.506062       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-756339" podCIDRs=["10.244.0.0/24"]
	I1123 08:45:30.509028       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:45:30.542651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:45:30.542668       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:30.542681       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:45:30.542701       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:45:30.543591       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:45:30.543627       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:45:30.543771       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:45:30.543920       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:45:30.544081       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:45:30.544114       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:45:30.544083       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 08:45:30.544276       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 08:45:30.544449       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:45:30.544494       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:45:30.544586       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:45:30.546221       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:45:30.546302       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:45:30.547487       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:45:30.549682       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:30.551841       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:45:30.558991       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:45:30.565428       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:45:45.494853       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6f15047b0bc79ef6d3ffc9a6197ba7d1fdb8ab296d2dbced3a7bf2dcff8c3b86] <==
	I1123 08:45:31.726382       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:45:31.816594       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:45:31.916853       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:45:31.916899       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:45:31.916986       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:45:32.014173       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:45:32.014226       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:45:32.019398       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:45:32.019849       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:45:32.019873       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:45:32.021171       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:45:32.021203       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:45:32.021214       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:45:32.021230       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:45:32.021255       1 config.go:200] "Starting service config controller"
	I1123 08:45:32.021275       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:45:32.021267       1 config.go:309] "Starting node config controller"
	I1123 08:45:32.021314       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:45:32.021322       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:45:32.121376       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:45:32.121400       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:45:32.121416       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b4ef8d41022ffb31ab1e1c84bf8e479771e3daaccf3f3ac3cf65ebf31b421449] <==
	I1123 08:45:23.586630       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 08:45:23.586751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:45:23.586838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:45:23.586993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:45:23.587215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:45:23.587351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:45:23.587586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:45:23.587615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:45:23.587617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:45:23.588006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:45:23.588367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:45:23.588444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:45:23.588456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:45:24.411819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:45:24.413784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 08:45:24.440674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:45:24.556194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:45:24.584585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:45:24.601496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:45:24.670472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:45:24.692704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:45:24.695611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:45:24.729507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:45:24.769378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1123 08:45:27.875779       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:45:26 embed-certs-756339 kubelet[1327]: I1123 08:45:26.707133    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-756339" podStartSLOduration=1.7071140439999999 podStartE2EDuration="1.707114044s" podCreationTimestamp="2025-11-23 08:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:26.69510124 +0000 UTC m=+1.136020438" watchObservedRunningTime="2025-11-23 08:45:26.707114044 +0000 UTC m=+1.148033242"
	Nov 23 08:45:26 embed-certs-756339 kubelet[1327]: I1123 08:45:26.715507    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-756339" podStartSLOduration=1.71549029 podStartE2EDuration="1.71549029s" podCreationTimestamp="2025-11-23 08:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:26.707344188 +0000 UTC m=+1.148263401" watchObservedRunningTime="2025-11-23 08:45:26.71549029 +0000 UTC m=+1.156409488"
	Nov 23 08:45:26 embed-certs-756339 kubelet[1327]: I1123 08:45:26.723913    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-756339" podStartSLOduration=1.723899309 podStartE2EDuration="1.723899309s" podCreationTimestamp="2025-11-23 08:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:26.715627545 +0000 UTC m=+1.156546743" watchObservedRunningTime="2025-11-23 08:45:26.723899309 +0000 UTC m=+1.164818556"
	Nov 23 08:45:30 embed-certs-756339 kubelet[1327]: I1123 08:45:30.569028    1327 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 08:45:30 embed-certs-756339 kubelet[1327]: I1123 08:45:30.569716    1327 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:45:31 embed-certs-756339 kubelet[1327]: I1123 08:45:31.373847    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfnxt\" (UniqueName: \"kubernetes.io/projected/ccaada88-aacd-436c-904f-d29f991dd2e3-kube-api-access-sfnxt\") pod \"kube-proxy-npnsh\" (UID: \"ccaada88-aacd-436c-904f-d29f991dd2e3\") " pod="kube-system/kube-proxy-npnsh"
	Nov 23 08:45:31 embed-certs-756339 kubelet[1327]: I1123 08:45:31.374039    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/98980dc0-c70d-4cf6-99cc-54bd34fbaa83-cni-cfg\") pod \"kindnet-4hsx6\" (UID: \"98980dc0-c70d-4cf6-99cc-54bd34fbaa83\") " pod="kube-system/kindnet-4hsx6"
	Nov 23 08:45:31 embed-certs-756339 kubelet[1327]: I1123 08:45:31.374324    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98980dc0-c70d-4cf6-99cc-54bd34fbaa83-lib-modules\") pod \"kindnet-4hsx6\" (UID: \"98980dc0-c70d-4cf6-99cc-54bd34fbaa83\") " pod="kube-system/kindnet-4hsx6"
	Nov 23 08:45:31 embed-certs-756339 kubelet[1327]: I1123 08:45:31.374760    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccaada88-aacd-436c-904f-d29f991dd2e3-xtables-lock\") pod \"kube-proxy-npnsh\" (UID: \"ccaada88-aacd-436c-904f-d29f991dd2e3\") " pod="kube-system/kube-proxy-npnsh"
	Nov 23 08:45:31 embed-certs-756339 kubelet[1327]: I1123 08:45:31.374901    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4xk\" (UniqueName: \"kubernetes.io/projected/98980dc0-c70d-4cf6-99cc-54bd34fbaa83-kube-api-access-wb4xk\") pod \"kindnet-4hsx6\" (UID: \"98980dc0-c70d-4cf6-99cc-54bd34fbaa83\") " pod="kube-system/kindnet-4hsx6"
	Nov 23 08:45:31 embed-certs-756339 kubelet[1327]: I1123 08:45:31.375045    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ccaada88-aacd-436c-904f-d29f991dd2e3-kube-proxy\") pod \"kube-proxy-npnsh\" (UID: \"ccaada88-aacd-436c-904f-d29f991dd2e3\") " pod="kube-system/kube-proxy-npnsh"
	Nov 23 08:45:31 embed-certs-756339 kubelet[1327]: I1123 08:45:31.375229    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccaada88-aacd-436c-904f-d29f991dd2e3-lib-modules\") pod \"kube-proxy-npnsh\" (UID: \"ccaada88-aacd-436c-904f-d29f991dd2e3\") " pod="kube-system/kube-proxy-npnsh"
	Nov 23 08:45:31 embed-certs-756339 kubelet[1327]: I1123 08:45:31.375415    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98980dc0-c70d-4cf6-99cc-54bd34fbaa83-xtables-lock\") pod \"kindnet-4hsx6\" (UID: \"98980dc0-c70d-4cf6-99cc-54bd34fbaa83\") " pod="kube-system/kindnet-4hsx6"
	Nov 23 08:45:32 embed-certs-756339 kubelet[1327]: I1123 08:45:32.698763    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-npnsh" podStartSLOduration=1.698743452 podStartE2EDuration="1.698743452s" podCreationTimestamp="2025-11-23 08:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:32.698632829 +0000 UTC m=+7.139552070" watchObservedRunningTime="2025-11-23 08:45:32.698743452 +0000 UTC m=+7.139662650"
	Nov 23 08:45:32 embed-certs-756339 kubelet[1327]: I1123 08:45:32.698886    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4hsx6" podStartSLOduration=1.698879066 podStartE2EDuration="1.698879066s" podCreationTimestamp="2025-11-23 08:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:32.69005554 +0000 UTC m=+7.130974738" watchObservedRunningTime="2025-11-23 08:45:32.698879066 +0000 UTC m=+7.139798263"
	Nov 23 08:45:42 embed-certs-756339 kubelet[1327]: I1123 08:45:42.593155    1327 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 08:45:42 embed-certs-756339 kubelet[1327]: I1123 08:45:42.647670    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ace09d0d-f2aa-4b6a-960e-1f660821a68b-tmp\") pod \"storage-provisioner\" (UID: \"ace09d0d-f2aa-4b6a-960e-1f660821a68b\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:42 embed-certs-756339 kubelet[1327]: I1123 08:45:42.647739    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twv7w\" (UniqueName: \"kubernetes.io/projected/ace09d0d-f2aa-4b6a-960e-1f660821a68b-kube-api-access-twv7w\") pod \"storage-provisioner\" (UID: \"ace09d0d-f2aa-4b6a-960e-1f660821a68b\") " pod="kube-system/storage-provisioner"
	Nov 23 08:45:42 embed-certs-756339 kubelet[1327]: I1123 08:45:42.647767    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr9vb\" (UniqueName: \"kubernetes.io/projected/de386500-381b-43aa-9998-52ac07eb6db3-kube-api-access-wr9vb\") pod \"coredns-66bc5c9577-ffmn2\" (UID: \"de386500-381b-43aa-9998-52ac07eb6db3\") " pod="kube-system/coredns-66bc5c9577-ffmn2"
	Nov 23 08:45:42 embed-certs-756339 kubelet[1327]: I1123 08:45:42.647799    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de386500-381b-43aa-9998-52ac07eb6db3-config-volume\") pod \"coredns-66bc5c9577-ffmn2\" (UID: \"de386500-381b-43aa-9998-52ac07eb6db3\") " pod="kube-system/coredns-66bc5c9577-ffmn2"
	Nov 23 08:45:43 embed-certs-756339 kubelet[1327]: I1123 08:45:43.716781    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ffmn2" podStartSLOduration=12.716762018 podStartE2EDuration="12.716762018s" podCreationTimestamp="2025-11-23 08:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.716718883 +0000 UTC m=+18.157638082" watchObservedRunningTime="2025-11-23 08:45:43.716762018 +0000 UTC m=+18.157681215"
	Nov 23 08:45:43 embed-certs-756339 kubelet[1327]: I1123 08:45:43.725041    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.725024101 podStartE2EDuration="12.725024101s" podCreationTimestamp="2025-11-23 08:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:43.725016794 +0000 UTC m=+18.165935992" watchObservedRunningTime="2025-11-23 08:45:43.725024101 +0000 UTC m=+18.165943298"
	Nov 23 08:45:45 embed-certs-756339 kubelet[1327]: I1123 08:45:45.968676    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmslr\" (UniqueName: \"kubernetes.io/projected/9d266def-c91d-4fd0-b04a-42a6fd90082f-kube-api-access-wmslr\") pod \"busybox\" (UID: \"9d266def-c91d-4fd0-b04a-42a6fd90082f\") " pod="default/busybox"
	Nov 23 08:45:47 embed-certs-756339 kubelet[1327]: I1123 08:45:47.725290    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.062536314 podStartE2EDuration="2.725272883s" podCreationTimestamp="2025-11-23 08:45:45 +0000 UTC" firstStartedPulling="2025-11-23 08:45:46.256065359 +0000 UTC m=+20.696984536" lastFinishedPulling="2025-11-23 08:45:46.918801929 +0000 UTC m=+21.359721105" observedRunningTime="2025-11-23 08:45:47.72502916 +0000 UTC m=+22.165948363" watchObservedRunningTime="2025-11-23 08:45:47.725272883 +0000 UTC m=+22.166192079"
	Nov 23 08:45:54 embed-certs-756339 kubelet[1327]: E1123 08:45:54.025401    1327 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55372->127.0.0.1:43323: write tcp 127.0.0.1:55372->127.0.0.1:43323: write: broken pipe
	
	
	==> storage-provisioner [0946855db2cecf0a41948629b062da201c05ab7b05b2dc3dd50ee9925a631f03] <==
	I1123 08:45:42.969891       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:45:42.977731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:45:42.977867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 08:45:42.980109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:42.985396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:42.985572       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:45:42.985658       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1d73ce1-e511-4dbb-abb8-24a6761d6508", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-756339_aa920d8e-dd74-476e-ae28-d4c70fa7a430 became leader
	I1123 08:45:42.985748       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-756339_aa920d8e-dd74-476e-ae28-d4c70fa7a430!
	W1123 08:45:42.988824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:42.992000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 08:45:43.086355       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-756339_aa920d8e-dd74-476e-ae28-d4c70fa7a430!
	W1123 08:45:44.994212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:44.997903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:47.000849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:47.003998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:49.006669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:49.010174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:51.013386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:51.017540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.020839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:53.024468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:55.027881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:45:55.032594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756339 -n embed-certs-756339
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-756339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.28s)

TestStartStop/group/embed-certs/serial/Pause (5.74s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-756339 --alsologtostderr -v=1
E1123 08:46:49.486956   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-756339 --alsologtostderr -v=1: exit status 80 (2.366979485s)

-- stdout --
	* Pausing node embed-certs-756339 ... 
	
	

-- /stdout --
** stderr ** 
	I1123 08:46:49.499068  343573 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:46:49.499177  343573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:49.499190  343573 out.go:374] Setting ErrFile to fd 2...
	I1123 08:46:49.499197  343573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:49.499357  343573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:46:49.499562  343573 out.go:368] Setting JSON to false
	I1123 08:46:49.499582  343573 mustload.go:66] Loading cluster: embed-certs-756339
	I1123 08:46:49.499894  343573 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:46:49.500275  343573 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:49.517880  343573 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:46:49.518098  343573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:49.577753  343573 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-23 08:46:49.568209377 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:49.578520  343573 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-756339 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 08:46:49.580566  343573 out.go:179] * Pausing node embed-certs-756339 ... 
	I1123 08:46:49.582009  343573 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:46:49.582319  343573 ssh_runner.go:195] Run: systemctl --version
	I1123 08:46:49.582366  343573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:49.599795  343573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:49.697505  343573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:46:49.708575  343573 pause.go:52] kubelet running: true
	I1123 08:46:49.708631  343573 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:46:49.864010  343573 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:46:49.864082  343573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:46:49.926631  343573 cri.go:89] found id: "bbbf5811d98e599301ff4819a115c3d8ef0030269a4475f9e2870b48cf71a5a6"
	I1123 08:46:49.926658  343573 cri.go:89] found id: "114c2a65428abda378156b9d44f78ab253febe754033d2ee3d3e166424ad8c09"
	I1123 08:46:49.926665  343573 cri.go:89] found id: "40a2d025621f3f6b23ef4784628f70db522cb9678d3dc68a626456ef60906012"
	I1123 08:46:49.926670  343573 cri.go:89] found id: "667faaf0e8e58c57d611bef19454dabbf3702a12f04127678f55004f0d720ff5"
	I1123 08:46:49.926674  343573 cri.go:89] found id: "fecd94a1c38bb8309076aa16021357b70119445d767154a26c2dff547a65ebbc"
	I1123 08:46:49.926680  343573 cri.go:89] found id: "9fedd2b23bc112a664b2a93370ec35729bf335846cb1900ce075ccb4249a78bc"
	I1123 08:46:49.926697  343573 cri.go:89] found id: "4ff763a88033c7e27e70d73ddb7f7e5a0438c94735aa92da4b55a18a0ee6a230"
	I1123 08:46:49.926702  343573 cri.go:89] found id: "e5104273ac6da10ca351abb35f39ede96c9db87366edff5ea0c38cceb92ced59"
	I1123 08:46:49.926706  343573 cri.go:89] found id: "01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a"
	I1123 08:46:49.926721  343573 cri.go:89] found id: "2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	I1123 08:46:49.926730  343573 cri.go:89] found id: ""
	I1123 08:46:49.926778  343573 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:46:49.937823  343573 retry.go:31] will retry after 281.750368ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:46:49Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:46:50.220433  343573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:46:50.232543  343573 pause.go:52] kubelet running: false
	I1123 08:46:50.232604  343573 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:46:50.368474  343573 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:46:50.368571  343573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:46:50.431251  343573 cri.go:89] found id: "bbbf5811d98e599301ff4819a115c3d8ef0030269a4475f9e2870b48cf71a5a6"
	I1123 08:46:50.431273  343573 cri.go:89] found id: "114c2a65428abda378156b9d44f78ab253febe754033d2ee3d3e166424ad8c09"
	I1123 08:46:50.431278  343573 cri.go:89] found id: "40a2d025621f3f6b23ef4784628f70db522cb9678d3dc68a626456ef60906012"
	I1123 08:46:50.431283  343573 cri.go:89] found id: "667faaf0e8e58c57d611bef19454dabbf3702a12f04127678f55004f0d720ff5"
	I1123 08:46:50.431288  343573 cri.go:89] found id: "fecd94a1c38bb8309076aa16021357b70119445d767154a26c2dff547a65ebbc"
	I1123 08:46:50.431292  343573 cri.go:89] found id: "9fedd2b23bc112a664b2a93370ec35729bf335846cb1900ce075ccb4249a78bc"
	I1123 08:46:50.431296  343573 cri.go:89] found id: "4ff763a88033c7e27e70d73ddb7f7e5a0438c94735aa92da4b55a18a0ee6a230"
	I1123 08:46:50.431300  343573 cri.go:89] found id: "e5104273ac6da10ca351abb35f39ede96c9db87366edff5ea0c38cceb92ced59"
	I1123 08:46:50.431303  343573 cri.go:89] found id: "01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a"
	I1123 08:46:50.431315  343573 cri.go:89] found id: "2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	I1123 08:46:50.431319  343573 cri.go:89] found id: ""
	I1123 08:46:50.431366  343573 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:46:50.442620  343573 retry.go:31] will retry after 509.415683ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:46:50Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:46:50.952209  343573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:46:50.964541  343573 pause.go:52] kubelet running: false
	I1123 08:46:50.964597  343573 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:46:51.099645  343573 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:46:51.099745  343573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:46:51.164539  343573 cri.go:89] found id: "bbbf5811d98e599301ff4819a115c3d8ef0030269a4475f9e2870b48cf71a5a6"
	I1123 08:46:51.164569  343573 cri.go:89] found id: "114c2a65428abda378156b9d44f78ab253febe754033d2ee3d3e166424ad8c09"
	I1123 08:46:51.164575  343573 cri.go:89] found id: "40a2d025621f3f6b23ef4784628f70db522cb9678d3dc68a626456ef60906012"
	I1123 08:46:51.164579  343573 cri.go:89] found id: "667faaf0e8e58c57d611bef19454dabbf3702a12f04127678f55004f0d720ff5"
	I1123 08:46:51.164582  343573 cri.go:89] found id: "fecd94a1c38bb8309076aa16021357b70119445d767154a26c2dff547a65ebbc"
	I1123 08:46:51.164585  343573 cri.go:89] found id: "9fedd2b23bc112a664b2a93370ec35729bf335846cb1900ce075ccb4249a78bc"
	I1123 08:46:51.164588  343573 cri.go:89] found id: "4ff763a88033c7e27e70d73ddb7f7e5a0438c94735aa92da4b55a18a0ee6a230"
	I1123 08:46:51.164591  343573 cri.go:89] found id: "e5104273ac6da10ca351abb35f39ede96c9db87366edff5ea0c38cceb92ced59"
	I1123 08:46:51.164594  343573 cri.go:89] found id: "01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a"
	I1123 08:46:51.164599  343573 cri.go:89] found id: "2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	I1123 08:46:51.164602  343573 cri.go:89] found id: ""
	I1123 08:46:51.164636  343573 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:46:51.175929  343573 retry.go:31] will retry after 400.80971ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:46:51Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:46:51.577562  343573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:46:51.590177  343573 pause.go:52] kubelet running: false
	I1123 08:46:51.590230  343573 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 08:46:51.726357  343573 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 08:46:51.726457  343573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 08:46:51.788923  343573 cri.go:89] found id: "bbbf5811d98e599301ff4819a115c3d8ef0030269a4475f9e2870b48cf71a5a6"
	I1123 08:46:51.788953  343573 cri.go:89] found id: "114c2a65428abda378156b9d44f78ab253febe754033d2ee3d3e166424ad8c09"
	I1123 08:46:51.788962  343573 cri.go:89] found id: "40a2d025621f3f6b23ef4784628f70db522cb9678d3dc68a626456ef60906012"
	I1123 08:46:51.788968  343573 cri.go:89] found id: "667faaf0e8e58c57d611bef19454dabbf3702a12f04127678f55004f0d720ff5"
	I1123 08:46:51.788971  343573 cri.go:89] found id: "fecd94a1c38bb8309076aa16021357b70119445d767154a26c2dff547a65ebbc"
	I1123 08:46:51.788975  343573 cri.go:89] found id: "9fedd2b23bc112a664b2a93370ec35729bf335846cb1900ce075ccb4249a78bc"
	I1123 08:46:51.788977  343573 cri.go:89] found id: "4ff763a88033c7e27e70d73ddb7f7e5a0438c94735aa92da4b55a18a0ee6a230"
	I1123 08:46:51.788986  343573 cri.go:89] found id: "e5104273ac6da10ca351abb35f39ede96c9db87366edff5ea0c38cceb92ced59"
	I1123 08:46:51.788989  343573 cri.go:89] found id: "01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a"
	I1123 08:46:51.788995  343573 cri.go:89] found id: "2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	I1123 08:46:51.788998  343573 cri.go:89] found id: ""
	I1123 08:46:51.789035  343573 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:46:51.802929  343573 out.go:203] 
	W1123 08:46:51.804177  343573 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:46:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:46:51.804194  343573 out.go:285] * 
	W1123 08:46:51.808107  343573 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:46:51.809288  343573 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-756339 --alsologtostderr -v=1 failed: exit status 80
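The stderr above traces the pause flow: minikube confirms kubelet is running, disables it, lists CRI containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces via crictl, then runs sudo runc list -f json, which fails on every retry with "open /run/runc: no such file or directory" until the command gives up and exits with GUEST_PAUSE (status 80). A minimal manual triage sketch, assuming the profile name above and that crictl and runc are on the node's PATH (whether /run/runc is the state root CRI-O actually uses here is an assumption, not something the log confirms):

	out/minikube-linux-amd64 -p embed-certs-756339 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system   # same CRI query the pause path uses above
	out/minikube-linux-amd64 -p embed-certs-756339 ssh -- sudo ls /run/runc                                                   # check whether the runc state root exists at all
	out/minikube-linux-amd64 -p embed-certs-756339 ssh -- sudo runc --root /run/runc list                                     # reproduce the failing call with an explicit root

If the crictl query returns containers while the runc state root is missing, that would point at the pause code's runc-based container listing rather than the cluster itself, consistent with crictl succeeding and runc failing in the log.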
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-756339
helpers_test.go:243: (dbg) docker inspect embed-certs-756339:

-- stdout --
	[
	    {
	        "Id": "dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f",
	        "Created": "2025-11-23T08:45:07.242299769Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341199,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:46:13.114614168Z",
	            "FinishedAt": "2025-11-23T08:46:12.325342638Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/hosts",
	        "LogPath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f-json.log",
	        "Name": "/embed-certs-756339",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-756339:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-756339",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f",
	                "LowerDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-756339",
	                "Source": "/var/lib/docker/volumes/embed-certs-756339/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-756339",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-756339",
	                "name.minikube.sigs.k8s.io": "embed-certs-756339",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f0f4b2562930a14e63683bedbd410bc6d874fe623ce5e52b80ca1269ebf0b5c4",
	            "SandboxKey": "/var/run/docker/netns/f0f4b2562930",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-756339": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "081a90797f7b1b1fb1a39e8b587fd717235565d36ed01f430e48a85f0e009f66",
	                    "EndpointID": "b4cfc92e522c83babad2b52e908ac798eb73d475355e2d8ca98b97fb33b4d1fb",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2e:fe:71:bf:75:01",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-756339",
	                        "dcc19a70aae1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
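For reference, individual fields in the inspect dump above can be pulled with Go templates instead of parsing the JSON by hand; the harness does exactly this further down. A minimal sketch (container name taken from this report):

	docker container inspect embed-certs-756339 --format '{{.State.Status}}'
	docker container inspect embed-certs-756339 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'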
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756339 -n embed-certs-756339
E1123 08:46:52.048908   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756339 -n embed-certs-756339: exit status 2 (309.746133ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-756339 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                              │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                             │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ delete  │ -p newest-cni-653361                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p newest-cni-653361                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-177890                                                                                                                         │ disable-driver-mounts-177890 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1  │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-057894 image list --format=json                                                                                                         │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-057894 --alsologtostderr -v=1                                                                                                        │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p old-k8s-version-057894                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-057894                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ default-k8s-diff-port-726261 image list --format=json                                                                                                   │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p default-k8s-diff-port-726261 --alsologtostderr -v=1                                                                                                  │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ image   │ no-preload-187607 image list --format=json                                                                                                              │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-187607 --alsologtostderr -v=1                                                                                                             │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ stop    │ -p embed-certs-756339 --alsologtostderr -v=3                                                                                                            │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p default-k8s-diff-port-726261                                                                                                                         │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p no-preload-187607                                                                                                                                    │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p default-k8s-diff-port-726261                                                                                                                         │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p no-preload-187607                                                                                                                                    │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ addons  │ enable dashboard -p embed-certs-756339 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                           │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1  │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ image   │ embed-certs-756339 image list --format=json                                                                                                             │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ pause   │ -p embed-certs-756339 --alsologtostderr -v=1                                                                                                            │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:46:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:46:12.901978  340997 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:46:12.902225  340997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:12.902234  340997 out.go:374] Setting ErrFile to fd 2...
	I1123 08:46:12.902238  340997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:12.902407  340997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:46:12.902818  340997 out.go:368] Setting JSON to false
	I1123 08:46:12.903660  340997 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5320,"bootTime":1763882253,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:46:12.903721  340997 start.go:143] virtualization: kvm guest
	I1123 08:46:12.905664  340997 out.go:179] * [embed-certs-756339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:46:12.907079  340997 notify.go:221] Checking for updates...
	I1123 08:46:12.907094  340997 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:46:12.908152  340997 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:46:12.909235  340997 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:46:12.910245  340997 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:46:12.911279  340997 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:46:12.912251  340997 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:46:12.913722  340997 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:46:12.914183  340997 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:46:12.936695  340997 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:46:12.936778  340997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:12.991437  340997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-23 08:46:12.982256299 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:12.991532  340997 docker.go:319] overlay module found
	I1123 08:46:12.993139  340997 out.go:179] * Using the docker driver based on existing profile
	I1123 08:46:12.994335  340997 start.go:309] selected driver: docker
	I1123 08:46:12.994347  340997 start.go:927] validating driver "docker" against &{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:12.994423  340997 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:46:12.995005  340997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:13.047993  340997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-23 08:46:13.038713525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:13.048270  340997 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:46:13.048298  340997 cni.go:84] Creating CNI manager for ""
	I1123 08:46:13.048348  340997 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:46:13.048388  340997 start.go:353] cluster config:
	{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:13.050048  340997 out.go:179] * Starting "embed-certs-756339" primary control-plane node in "embed-certs-756339" cluster
	I1123 08:46:13.051091  340997 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:46:13.052135  340997 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:46:13.053175  340997 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:46:13.053207  340997 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:46:13.053217  340997 cache.go:65] Caching tarball of preloaded images
	I1123 08:46:13.053244  340997 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:46:13.053300  340997 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:46:13.053314  340997 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:46:13.053423  340997 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:46:13.072755  340997 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:46:13.072770  340997 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:46:13.072785  340997 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:46:13.072817  340997 start.go:360] acquireMachinesLock for embed-certs-756339: {Name:mk2607c5ea38ca6bd330e0a548b36202f67f84a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:46:13.072885  340997 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "embed-certs-756339"
	I1123 08:46:13.072906  340997 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:46:13.072915  340997 fix.go:54] fixHost starting: 
	I1123 08:46:13.073130  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:13.089147  340997 fix.go:112] recreateIfNeeded on embed-certs-756339: state=Stopped err=<nil>
	W1123 08:46:13.089179  340997 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:46:13.090669  340997 out.go:252] * Restarting existing docker container for "embed-certs-756339" ...
	I1123 08:46:13.090746  340997 cli_runner.go:164] Run: docker start embed-certs-756339
	I1123 08:46:13.347661  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:13.365299  340997 kic.go:430] container "embed-certs-756339" state is running.
	I1123 08:46:13.365726  340997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:46:13.382955  340997 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:46:13.383157  340997 machine.go:94] provisionDockerMachine start ...
	I1123 08:46:13.383243  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:13.400993  340997 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:13.401268  340997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I1123 08:46:13.401284  340997 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:46:13.401993  340997 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50720->127.0.0.1:33136: read: connection reset by peer
	I1123 08:46:16.543102  340997 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:46:16.543137  340997 ubuntu.go:182] provisioning hostname "embed-certs-756339"
	I1123 08:46:16.543217  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:16.560360  340997 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:16.560584  340997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I1123 08:46:16.560601  340997 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-756339 && echo "embed-certs-756339" | sudo tee /etc/hostname
	I1123 08:46:16.707400  340997 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:46:16.707471  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:16.724835  340997 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:16.725052  340997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I1123 08:46:16.725075  340997 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-756339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-756339/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-756339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:46:16.863396  340997 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:46:16.863423  340997 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:46:16.863438  340997 ubuntu.go:190] setting up certificates
	I1123 08:46:16.863454  340997 provision.go:84] configureAuth start
	I1123 08:46:16.863517  340997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:46:16.880826  340997 provision.go:143] copyHostCerts
	I1123 08:46:16.880886  340997 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:46:16.880903  340997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:46:16.880968  340997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:46:16.881060  340997 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:46:16.881096  340997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:46:16.881127  340997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:46:16.881187  340997 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:46:16.881202  340997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:46:16.881233  340997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:46:16.881281  340997 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.embed-certs-756339 san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]
	I1123 08:46:17.077587  340997 provision.go:177] copyRemoteCerts
	I1123 08:46:17.077645  340997 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:46:17.077677  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.095052  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.194032  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:46:17.209980  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:46:17.225913  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:46:17.241378  340997 provision.go:87] duration metric: took 377.915171ms to configureAuth
	I1123 08:46:17.241402  340997 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:46:17.241626  340997 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:46:17.241760  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.259214  340997 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:17.259443  340997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I1123 08:46:17.259461  340997 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:46:17.567557  340997 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:46:17.567580  340997 machine.go:97] duration metric: took 4.184402014s to provisionDockerMachine
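The CRIO_MINIKUBE_OPTIONS drop-in written above only takes effect if crio.service sources it; a quick check, assuming (as in minikube's kicbase image) that the unit references /etc/sysconfig/crio.minikube via EnvironmentFile:

	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i environment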
	I1123 08:46:17.567594  340997 start.go:293] postStartSetup for "embed-certs-756339" (driver="docker")
	I1123 08:46:17.567606  340997 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:46:17.567658  340997 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:46:17.567735  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.586353  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.685006  340997 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:46:17.688100  340997 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:46:17.688129  340997 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:46:17.688139  340997 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:46:17.688181  340997 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:46:17.688248  340997 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:46:17.688336  340997 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:46:17.695279  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:46:17.710930  340997 start.go:296] duration metric: took 143.32384ms for postStartSetup
	I1123 08:46:17.710989  340997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:46:17.711055  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.728089  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.822936  340997 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:46:17.827093  340997 fix.go:56] duration metric: took 4.754171713s for fixHost
	I1123 08:46:17.827116  340997 start.go:83] releasing machines lock for "embed-certs-756339", held for 4.754217721s
	I1123 08:46:17.827178  340997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:46:17.845055  340997 ssh_runner.go:195] Run: cat /version.json
	I1123 08:46:17.845115  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.845158  340997 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:46:17.845228  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.862337  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.862680  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.957636  340997 ssh_runner.go:195] Run: systemctl --version
	I1123 08:46:18.011929  340997 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:46:18.043580  340997 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:46:18.047789  340997 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:46:18.047841  340997 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:46:18.055036  340997 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:46:18.055051  340997 start.go:496] detecting cgroup driver to use...
	I1123 08:46:18.055075  340997 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:46:18.055115  340997 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:46:18.068728  340997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:46:18.079363  340997 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:46:18.079399  340997 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:46:18.091759  340997 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:46:18.102275  340997 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:46:18.176082  340997 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:46:18.252373  340997 docker.go:234] disabling docker service ...
	I1123 08:46:18.252443  340997 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:46:18.265014  340997 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:46:18.276152  340997 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:46:18.350567  340997 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:46:18.427370  340997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:46:18.439133  340997 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:46:18.452221  340997 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:46:18.452263  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.460219  340997 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:46:18.460267  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.468113  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.475725  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.483431  340997 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:46:18.490536  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.498413  340997 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.505801  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.513378  340997 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:46:18.520052  340997 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:46:18.526551  340997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:18.601144  340997 ssh_runner.go:195] Run: sudo systemctl restart crio
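The core of the sed sequence above is rewriting two keys in the same CRI-O drop-in and restarting the daemon; a condensed sketch of just those two edits (paths and values taken from the log, not minikube's actual code path):

	conf=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i \
	  -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
	  -e 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' \
	  "$conf"
	sudo systemctl daemon-reload && sudo systemctl restart crio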
	I1123 08:46:18.732615  340997 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:46:18.732676  340997 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:46:18.736327  340997 start.go:564] Will wait 60s for crictl version
	I1123 08:46:18.736373  340997 ssh_runner.go:195] Run: which crictl
	I1123 08:46:18.739678  340997 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:46:18.761490  340997 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:46:18.761555  340997 ssh_runner.go:195] Run: crio --version
	I1123 08:46:18.786991  340997 ssh_runner.go:195] Run: crio --version
	I1123 08:46:18.813897  340997 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:46:18.814827  340997 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
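The Subnet and Gateway fields queried by the template above come from the network's IPAM configuration; a simpler equivalent for just those two values:

	docker network inspect embed-certs-756339 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'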
	I1123 08:46:18.831725  340997 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 08:46:18.835472  340997 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
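The command above is the replace-or-append pattern minikube uses for pinned /etc/hosts entries: drop any stale line for the name, append the fresh mapping, then copy the temp file back in one step. Generalized sketch (bash, values from the log line above):

	name=host.minikube.internal ip=192.168.103.1
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts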
	I1123 08:46:18.845241  340997 kubeadm.go:884] updating cluster {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:46:18.845350  340997 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:46:18.845392  340997 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:46:18.877554  340997 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:46:18.877572  340997 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:46:18.877613  340997 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:46:18.899912  340997 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:46:18.899933  340997 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:46:18.899942  340997 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 08:46:18.900046  340997 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-756339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
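The rendered unit and flags above are installed a few steps later as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; to inspect the merged result on the node:

	systemctl cat kubelet               # base unit plus the 10-kubeadm.conf drop-in
	systemctl status kubelet --no-pager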
	I1123 08:46:18.900102  340997 ssh_runner.go:195] Run: crio config
	I1123 08:46:18.942299  340997 cni.go:84] Creating CNI manager for ""
	I1123 08:46:18.942315  340997 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:46:18.942329  340997 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:46:18.942348  340997 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-756339 NodeName:embed-certs-756339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:46:18.942473  340997 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-756339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
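	This manifest is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; recent kubeadm releases can sanity-check such a file before it is used (a sketch, assuming the bundled kubeadm binary supports `kubeadm config validate`):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new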
	
	I1123 08:46:18.942521  340997 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:46:18.949776  340997 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:46:18.949819  340997 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:46:18.956734  340997 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 08:46:18.968094  340997 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:46:18.979264  340997 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1123 08:46:18.990260  340997 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:46:18.993443  340997 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:46:19.002330  340997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:19.079500  340997 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:46:19.102523  340997 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339 for IP: 192.168.103.2
	I1123 08:46:19.102541  340997 certs.go:195] generating shared ca certs ...
	I1123 08:46:19.102557  340997 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:19.102709  340997 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:46:19.102769  340997 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:46:19.102784  340997 certs.go:257] generating profile certs ...
	I1123 08:46:19.102901  340997 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key
	I1123 08:46:19.102972  340997 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354
	I1123 08:46:19.103028  340997 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key
	I1123 08:46:19.103176  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:46:19.103222  340997 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:46:19.103237  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:46:19.103274  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:46:19.103309  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:46:19.103345  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:46:19.103403  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:46:19.104130  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:46:19.120712  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:46:19.137600  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:46:19.154548  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:46:19.174779  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:46:19.192961  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:46:19.208896  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:46:19.224356  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:46:19.239792  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:46:19.255286  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:46:19.270992  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:46:19.287614  340997 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:46:19.299010  340997 ssh_runner.go:195] Run: openssl version
	I1123 08:46:19.304451  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:46:19.311897  340997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:19.315017  340997 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:19.315054  340997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:19.348176  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:46:19.354957  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:46:19.362275  340997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:46:19.365623  340997 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:46:19.365658  340997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:46:19.398414  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
	I1123 08:46:19.405243  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:46:19.412673  340997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:46:19.415933  340997 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:46:19.415965  340997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:46:19.448374  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
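
The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and a `<hash>.0` symlink in /etc/ssl/certs lets TLS verification locate the CA by that hash. A hedged Go sketch that shells out to openssl the same way (linkByHash is an illustrative helper, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash computes the OpenSSL subject hash of certPath and creates
	// the conventional "<hash>.0" symlink in /etc/ssl/certs, as in the log.
	func linkByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("openssl hash: %w", err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // emulate ln -fs: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
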
	I1123 08:46:19.455326  340997 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:46:19.458664  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:46:19.492615  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:46:19.525027  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:46:19.557519  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:46:19.591434  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:46:19.632193  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
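
Each `-checkend 86400` invocation asks openssl whether the certificate expires within the next 86400 seconds, i.e. 24 hours; a non-zero exit would flag the cert for regeneration. The same check can be expressed with Go's crypto/x509, as in this sketch (expiresWithin is illustrative, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the equivalent of `openssl x509 -checkend <seconds>` exiting non-zero.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
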
	I1123 08:46:19.681257  340997 kubeadm.go:401] StartCluster: {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:19.681371  340997 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:46:19.681461  340997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:46:19.716956  340997 cri.go:89] found id: "fecd94a1c38bb8309076aa16021357b70119445d767154a26c2dff547a65ebbc"
	I1123 08:46:19.717001  340997 cri.go:89] found id: "9fedd2b23bc112a664b2a93370ec35729bf335846cb1900ce075ccb4249a78bc"
	I1123 08:46:19.717008  340997 cri.go:89] found id: "4ff763a88033c7e27e70d73ddb7f7e5a0438c94735aa92da4b55a18a0ee6a230"
	I1123 08:46:19.717013  340997 cri.go:89] found id: "e5104273ac6da10ca351abb35f39ede96c9db87366edff5ea0c38cceb92ced59"
	I1123 08:46:19.717017  340997 cri.go:89] found id: ""
	I1123 08:46:19.717067  340997 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 08:46:19.732189  340997 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:46:19Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:46:19.732264  340997 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:46:19.742034  340997 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:46:19.742051  340997 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:46:19.742107  340997 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:46:19.749351  340997 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:46:19.749799  340997 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-756339" does not appear in /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:46:19.749905  340997 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-10964/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-756339" cluster setting kubeconfig missing "embed-certs-756339" context setting]
	I1123 08:46:19.750131  340997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:19.751281  340997 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:46:19.758432  340997 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1123 08:46:19.758465  340997 kubeadm.go:602] duration metric: took 16.40727ms to restartPrimaryControlPlane
	I1123 08:46:19.758478  340997 kubeadm.go:403] duration metric: took 77.234552ms to StartCluster
	I1123 08:46:19.758496  340997 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:19.758559  340997 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:46:19.759286  340997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:19.759501  340997 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:46:19.759557  340997 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:46:19.759639  340997 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-756339"
	I1123 08:46:19.759655  340997 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-756339"
	W1123 08:46:19.759661  340997 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:46:19.759681  340997 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:46:19.759691  340997 addons.go:70] Setting dashboard=true in profile "embed-certs-756339"
	I1123 08:46:19.759711  340997 addons.go:239] Setting addon dashboard=true in "embed-certs-756339"
	W1123 08:46:19.759720  340997 addons.go:248] addon dashboard should already be in state true
	I1123 08:46:19.759731  340997 addons.go:70] Setting default-storageclass=true in profile "embed-certs-756339"
	I1123 08:46:19.759751  340997 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:46:19.759753  340997 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:46:19.759759  340997 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-756339"
	I1123 08:46:19.760065  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:19.760123  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:19.760305  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:19.761223  340997 out.go:179] * Verifying Kubernetes components...
	I1123 08:46:19.762405  340997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:19.785674  340997 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:46:19.786594  340997 addons.go:239] Setting addon default-storageclass=true in "embed-certs-756339"
	W1123 08:46:19.786615  340997 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:46:19.786640  340997 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:46:19.786717  340997 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:46:19.787140  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:19.787631  340997 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:46:19.787716  340997 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:46:19.787737  340997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:46:19.787784  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:19.788465  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:46:19.788478  340997 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:46:19.788521  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:19.818114  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:19.819304  340997 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:46:19.819328  340997 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:46:19.819398  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:19.822764  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:19.842872  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:19.905366  340997 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:46:19.918039  340997 node_ready.go:35] waiting up to 6m0s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:46:19.934162  340997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:46:19.938347  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:46:19.938366  340997 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:46:19.953067  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:46:19.953081  340997 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:46:19.956161  340997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:46:19.968408  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:46:19.968423  340997 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:46:19.982658  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:46:19.982673  340997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:46:19.995163  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:46:19.995305  340997 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:46:20.007971  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:46:20.007987  340997 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:46:20.020578  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:46:20.020615  340997 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:46:20.034464  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:46:20.034481  340997 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:46:20.049019  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:46:20.049034  340997 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:46:20.060459  340997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
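
All ten dashboard manifests go to the API server in a single kubectl invocation with repeated -f flags, using the cluster's pinned kubectl binary and in-VM kubeconfig. A sketch of assembling such a command in Go (the manifest list is abbreviated; paths match the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-dp.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
			// ... remaining dashboard-*.yaml manifests as in the log
		}
		// sudo accepts VAR=value assignments before the command itself.
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m) // one -f per manifest, single apply
		}
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "apply failed:", err)
		}
	}
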
	I1123 08:46:21.160673  340997 node_ready.go:49] node "embed-certs-756339" is "Ready"
	I1123 08:46:21.160722  340997 node_ready.go:38] duration metric: took 1.242651635s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:46:21.160739  340997 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:46:21.160796  340997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:46:21.630247  340997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.696054158s)
	I1123 08:46:21.630307  340997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.674112649s)
	I1123 08:46:21.630388  340997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.569905342s)
	I1123 08:46:21.630446  340997 api_server.go:72] duration metric: took 1.870911862s to wait for apiserver process to appear ...
	I1123 08:46:21.630544  340997 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:46:21.630564  340997 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:46:21.635119  340997 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-756339 addons enable metrics-server
	
	I1123 08:46:21.637098  340997 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:46:21.637120  340997 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:46:21.642667  340997 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 08:46:21.643646  340997 addons.go:530] duration metric: took 1.884096148s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:46:22.131207  340997 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:46:22.135178  340997 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:46:22.135221  340997 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
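
The 500s here are expected during a restart: /healthz aggregates per-component checks and keeps returning 500 until every poststarthook (rbac/bootstrap-roles is the last holdout above) reports ok, after which the body collapses to a bare `ok`. minikube simply re-polls on a short interval; a hedged sketch of such a loop (the InsecureSkipVerify transport is illustration only, standing in for proper CA pinning against the self-signed apiserver endpoint):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url every interval until /healthz returns 200 "ok"
	// or the deadline passes, mirroring the retry pattern in the log.
	func waitHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				// Illustration only: skip verification of the apiserver cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("healthz not ok within %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.103.2:8443/healthz", 500*time.Millisecond, time.Minute); err != nil {
			fmt.Println(err)
		}
	}
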
	I1123 08:46:22.630838  340997 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:46:22.635447  340997 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:46:22.636270  340997 api_server.go:141] control plane version: v1.34.1
	I1123 08:46:22.636294  340997 api_server.go:131] duration metric: took 1.005743595s to wait for apiserver health ...
	I1123 08:46:22.636304  340997 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:46:22.639251  340997 system_pods.go:59] 8 kube-system pods found
	I1123 08:46:22.639277  340997 system_pods.go:61] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:46:22.639284  340997 system_pods.go:61] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:46:22.639292  340997 system_pods.go:61] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:46:22.639298  340997 system_pods.go:61] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:46:22.639303  340997 system_pods.go:61] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:46:22.639308  340997 system_pods.go:61] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:46:22.639313  340997 system_pods.go:61] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:46:22.639318  340997 system_pods.go:61] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:46:22.639327  340997 system_pods.go:74] duration metric: took 3.016997ms to wait for pod list to return data ...
	I1123 08:46:22.639333  340997 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:46:22.641350  340997 default_sa.go:45] found service account: "default"
	I1123 08:46:22.641367  340997 default_sa.go:55] duration metric: took 2.02915ms for default service account to be created ...
	I1123 08:46:22.641374  340997 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:46:22.645584  340997 system_pods.go:86] 8 kube-system pods found
	I1123 08:46:22.645610  340997 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:46:22.645617  340997 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:46:22.645624  340997 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:46:22.645629  340997 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:46:22.645634  340997 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:46:22.645643  340997 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:46:22.645647  340997 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:46:22.645654  340997 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:46:22.645661  340997 system_pods.go:126] duration metric: took 4.281367ms to wait for k8s-apps to be running ...
	I1123 08:46:22.645669  340997 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:46:22.645720  340997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:46:22.658417  340997 system_svc.go:56] duration metric: took 12.742319ms WaitForService to wait for kubelet
	I1123 08:46:22.658442  340997 kubeadm.go:587] duration metric: took 2.898909117s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:46:22.658463  340997 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:46:22.660710  340997 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:46:22.660735  340997 node_conditions.go:123] node cpu capacity is 8
	I1123 08:46:22.660759  340997 node_conditions.go:105] duration metric: took 2.282592ms to run NodePressure ...
	I1123 08:46:22.660777  340997 start.go:242] waiting for startup goroutines ...
	I1123 08:46:22.660791  340997 start.go:247] waiting for cluster config update ...
	I1123 08:46:22.660806  340997 start.go:256] writing updated cluster config ...
	I1123 08:46:22.661057  340997 ssh_runner.go:195] Run: rm -f paused
	I1123 08:46:22.664524  340997 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:46:22.667090  340997 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:46:24.673192  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: node "embed-certs-756339" hosting pod "coredns-66bc5c9577-ffmn2" is not "Ready" (will retry)
	W1123 08:46:27.172047  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: node "embed-certs-756339" hosting pod "coredns-66bc5c9577-ffmn2" is not "Ready" (will retry)
	W1123 08:46:29.671966  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: node "embed-certs-756339" hosting pod "coredns-66bc5c9577-ffmn2" is not "Ready" (will retry)
	W1123 08:46:32.173085  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: <nil>
	W1123 08:46:34.673334  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: <nil>
	I1123 08:46:35.173129  340997 pod_ready.go:94] pod "coredns-66bc5c9577-ffmn2" is "Ready"
	I1123 08:46:35.173158  340997 pod_ready.go:86] duration metric: took 12.506047472s for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.175738  340997 pod_ready.go:83] waiting for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.179790  340997 pod_ready.go:94] pod "etcd-embed-certs-756339" is "Ready"
	I1123 08:46:35.179808  340997 pod_ready.go:86] duration metric: took 4.047689ms for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.181849  340997 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.688207  340997 pod_ready.go:94] pod "kube-apiserver-embed-certs-756339" is "Ready"
	I1123 08:46:35.688240  340997 pod_ready.go:86] duration metric: took 506.369537ms for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.690480  340997 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:36.695578  340997 pod_ready.go:94] pod "kube-controller-manager-embed-certs-756339" is "Ready"
	I1123 08:46:36.695604  340997 pod_ready.go:86] duration metric: took 1.005104111s for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:36.771015  340997 pod_ready.go:83] waiting for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:37.170426  340997 pod_ready.go:94] pod "kube-proxy-npnsh" is "Ready"
	I1123 08:46:37.170450  340997 pod_ready.go:86] duration metric: took 399.414309ms for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:37.371202  340997 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:37.770286  340997 pod_ready.go:94] pod "kube-scheduler-embed-certs-756339" is "Ready"
	I1123 08:46:37.770311  340997 pod_ready.go:86] duration metric: took 399.084634ms for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:37.770322  340997 pod_ready.go:40] duration metric: took 15.105775579s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:46:37.811542  340997 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:46:37.812894  340997 out.go:179] * Done! kubectl is now configured to use "embed-certs-756339" cluster and "default" namespace by default
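
The pod_ready loop that precedes the Done! line retries each labelled kube-system pod until its PodReady condition turns True; the early failures are the node itself not yet being Ready, not the pods. With client-go the per-pod check reduces to reading status conditions, as in this minimal sketch (the KUBECONFIG source and polling cadence are assumptions, not minikube's code):

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumes KUBECONFIG points at the embed-certs-756339 cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		deadline := time.Now().Add(4 * time.Minute) // the log's "extra waiting up to 4m0s"
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
				"coredns-66bc5c9577-ffmn2", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod")
	}
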
	
	
	==> CRI-O <==
	Nov 23 08:46:33 embed-certs-756339 crio[568]: time="2025-11-23T08:46:33.772315864Z" level=info msg="Created container 3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper" id=06d12ff3-8a9f-4789-8487-a7b246db87e3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:33 embed-certs-756339 crio[568]: time="2025-11-23T08:46:33.772814557Z" level=info msg="Starting container: 3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7" id=e8640961-2e91-49cd-9b2f-8455c3c9fc3c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:46:33 embed-certs-756339 crio[568]: time="2025-11-23T08:46:33.774392543Z" level=info msg="Started container" PID=1633 containerID=3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper id=e8640961-2e91-49cd-9b2f-8455c3c9fc3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8a5636aaf38911a1dab4ae0cf765cbbdce95abdbf16db58a10e206423f7a06f
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.235167227Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b0d9131d-288c-42ac-abe1-75cea9830988 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.237655687Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2cb93beb-b609-45c8-8ea3-9b1a22eaadfb name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.240222416Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper" id=c0879848-e289-454f-93aa-17e82bafebaa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.240343151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.247580488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.248232193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.275304605Z" level=info msg="Created container 2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper" id=c0879848-e289-454f-93aa-17e82bafebaa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.27586203Z" level=info msg="Starting container: 2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0" id=91e0c3e3-a653-4f27-b221-e39abee81f62 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.277384303Z" level=info msg="Started container" PID=1644 containerID=2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper id=91e0c3e3-a653-4f27-b221-e39abee81f62 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8a5636aaf38911a1dab4ae0cf765cbbdce95abdbf16db58a10e206423f7a06f
	Nov 23 08:46:35 embed-certs-756339 crio[568]: time="2025-11-23T08:46:35.240505639Z" level=info msg="Removing container: 3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7" id=4103dd4e-cd0b-4bcb-be8f-631eedfe1494 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:46:35 embed-certs-756339 crio[568]: time="2025-11-23T08:46:35.251341583Z" level=info msg="Removed container 3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper" id=4103dd4e-cd0b-4bcb-be8f-631eedfe1494 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.971036694Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=9a4aa898-1428-4f99-bc79-6178ac0afff5 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.971663606Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f5c8ee92-381b-4e1b-9754-26dd8ac70f17 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.973165398Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1c4997d9-d05e-4405-89b7-944dc97328d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.976746791Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv/kubernetes-dashboard" id=b2699d5f-1e9e-4fce-92c7-ffd55e2aeee8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.976848367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.980432796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.980598898Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/63d171f613f5e892d49d5c8703e16c9e259f373f11d466ffd1054ca5de136c56/merged/etc/group: no such file or directory"
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.980917748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:37 embed-certs-756339 crio[568]: time="2025-11-23T08:46:37.006366258Z" level=info msg="Created container 01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv/kubernetes-dashboard" id=b2699d5f-1e9e-4fce-92c7-ffd55e2aeee8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:37 embed-certs-756339 crio[568]: time="2025-11-23T08:46:37.006932211Z" level=info msg="Starting container: 01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a" id=866925e3-731b-4ef3-a856-7bf0709cc81a name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:46:37 embed-certs-756339 crio[568]: time="2025-11-23T08:46:37.008443686Z" level=info msg="Started container" PID=1696 containerID=01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv/kubernetes-dashboard id=866925e3-731b-4ef3-a856-7bf0709cc81a name=/runtime.v1.RuntimeService/StartContainer sandboxID=cdbb38a0bdf5b0e60729384c37f2cdb9aca04f829da50ac5108b34a070c1bbf9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	01ad288a6ce4d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   15 seconds ago      Running             kubernetes-dashboard        0                   cdbb38a0bdf5b       kubernetes-dashboard-855c9754f9-zs7hv        kubernetes-dashboard
	2b807150e95f4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   1                   e8a5636aaf389       dashboard-metrics-scraper-6ffb444bf9-fgk55   kubernetes-dashboard
	bbbf5811d98e5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           26 seconds ago      Running             coredns                     0                   7ce5ff51b870c       coredns-66bc5c9577-ffmn2                     kube-system
	be4e071c05335       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           26 seconds ago      Running             busybox                     1                   ead5272d793cf       busybox                                      default
	114c2a65428ab       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           30 seconds ago      Running             kube-proxy                  0                   038b49b3ae23c       kube-proxy-npnsh                             kube-system
	40a2d025621f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           30 seconds ago      Exited              storage-provisioner         0                   0b87ad4dec4ca       storage-provisioner                          kube-system
	667faaf0e8e58       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           30 seconds ago      Running             kindnet-cni                 0                   20a5291bafc14       kindnet-4hsx6                                kube-system
	fecd94a1c38bb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           33 seconds ago      Running             kube-controller-manager     0                   ed438f70ab71e       kube-controller-manager-embed-certs-756339   kube-system
	9fedd2b23bc11       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           33 seconds ago      Running             kube-scheduler              0                   469fe93b05de4       kube-scheduler-embed-certs-756339            kube-system
	4ff763a88033c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           33 seconds ago      Running             etcd                        0                   80533651da939       etcd-embed-certs-756339                      kube-system
	e5104273ac6da       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           33 seconds ago      Running             kube-apiserver              0                   0e3eb146a4bd0       kube-apiserver-embed-certs-756339            kube-system
	
	
	==> coredns [bbbf5811d98e599301ff4819a115c3d8ef0030269a4475f9e2870b48cf71a5a6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40087 - 57775 "HINFO IN 5637573327108728621.246995809119248048. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.059543582s
	
	
	==> describe nodes <==
	Name:               embed-certs-756339
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-756339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-756339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_45_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:45:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-756339
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:46:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:46:31 +0000   Sun, 23 Nov 2025 08:45:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:46:31 +0000   Sun, 23 Nov 2025 08:45:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:46:31 +0000   Sun, 23 Nov 2025 08:45:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:46:31 +0000   Sun, 23 Nov 2025 08:46:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-756339
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d012ad2e-0684-44d6-8937-6f0e3eaafce4
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-66bc5c9577-ffmn2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     81s
	  kube-system                 etcd-embed-certs-756339                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         88s
	  kube-system                 kindnet-4hsx6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      81s
	  kube-system                 kube-apiserver-embed-certs-756339             250m (3%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-embed-certs-756339    200m (2%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-npnsh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-embed-certs-756339             100m (1%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fgk55    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zs7hv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node embed-certs-756339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    87s                kubelet          Node embed-certs-756339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  87s                kubelet          Node embed-certs-756339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     87s                kubelet          Node embed-certs-756339 status is now: NodeHasSufficientPID
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           82s                node-controller  Node embed-certs-756339 event: Registered Node embed-certs-756339 in Controller
	  Normal  NodeReady                70s                kubelet          Node embed-certs-756339 status is now: NodeReady
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node embed-certs-756339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x8 over 33s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node embed-certs-756339 event: Registered Node embed-certs-756339 in Controller
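
The Conditions and Events tables above are kubelet-reported node state captured during the post-mortem. For readers who want to poll the same Ready condition programmatically rather than scrape describe-node output, a minimal client-go sketch follows (the kubeconfig path is an assumption for illustration, not taken from this run):

// node_ready.go: a minimal sketch, not part of the test suite, that reads
// the Ready condition shown in the "Conditions:" table above via client-go.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-756339", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		// NodeReady is the condition reported as "Ready" in the table above.
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s (%s)\n", c.Status, c.Reason, c.Message)
		}
	}
}
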
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	
	
	==> etcd [4ff763a88033c7e27e70d73ddb7f7e5a0438c94735aa92da4b55a18a0ee6a230] <==
	{"level":"warn","ts":"2025-11-23T08:46:20.553419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.560794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.571184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.578357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.585056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.592072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.598278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.606900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.613525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.620484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.630888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.637521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.644321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.659587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.665905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.672451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.678306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.684598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.690421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.696908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.704296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.719190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.725094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.730907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.778272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33346","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:46:52 up  1:29,  0 user,  load average: 2.09, 3.29, 2.37
	Linux embed-certs-756339 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [667faaf0e8e58c57d611bef19454dabbf3702a12f04127678f55004f0d720ff5] <==
	I1123 08:46:22.656677       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:46:22.656918       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:46:22.657017       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:46:22.657032       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:46:22.657054       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:46:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:46:22.858673       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:46:22.859023       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:46:22.859065       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:46:22.859563       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:46:52.860167       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:46:52.860170       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:46:52.860167       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:46:52.860167       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
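
The kindnet reflector errors above show the in-cluster apiserver VIP (10.96.0.1:443) timing out, which is the expected symptom while the node is paused. A minimal reachability sketch of the same endpoint (the /version path is used only because it is served to unauthenticated clients; certificate verification is skipped since this probes reachability, not identity):

// apiserver_reach.go: a minimal sketch of the reachability check that fails
// above with "dial tcp 10.96.0.1:443: i/o timeout".
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/version")
	if err != nil {
		// On a paused node this is expected to time out, matching the log.
		log.Fatalf("probe failed: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("apiserver reachable: HTTP %d", resp.StatusCode)
}
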
	
	
	==> kube-apiserver [e5104273ac6da10ca351abb35f39ede96c9db87366edff5ea0c38cceb92ced59] <==
	I1123 08:46:21.220268       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:46:21.220282       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:46:21.220543       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:46:21.220341       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:46:21.220293       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:46:21.221030       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:46:21.221037       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:46:21.221043       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:46:21.220314       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:46:21.220325       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1123 08:46:21.225578       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:46:21.228000       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:46:21.253620       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:46:21.267983       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:46:21.443586       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:46:21.467647       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:46:21.482875       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:46:21.489159       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:46:21.494589       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:46:21.521717       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.131.251"}
	I1123 08:46:21.531361       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.49.200"}
	I1123 08:46:22.121851       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:46:24.755824       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:46:24.956335       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:46:25.006189       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fecd94a1c38bb8309076aa16021357b70119445d767154a26c2dff547a65ebbc] <==
	I1123 08:46:24.513355       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:46:24.513362       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:46:24.515014       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:46:24.551814       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:46:24.552831       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:46:24.552852       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:46:24.552864       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:46:24.553019       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:46:24.553032       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:46:24.553096       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:46:24.553124       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:46:24.553266       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:46:24.553399       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:46:24.553569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:46:24.554395       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:46:24.554500       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:46:24.555642       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:46:24.556804       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:46:24.557928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:46:24.559005       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:46:24.565268       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:46:24.566456       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:46:24.569722       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:46:24.575934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:46:34.479589       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [114c2a65428abda378156b9d44f78ab253febe754033d2ee3d3e166424ad8c09] <==
	I1123 08:46:22.548891       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:46:22.602378       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:46:22.703352       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:46:22.703387       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:46:22.703469       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:46:22.720115       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:46:22.720171       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:46:22.725172       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:46:22.725627       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:46:22.725657       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:46:22.727066       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:46:22.727085       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:46:22.727136       1 config.go:200] "Starting service config controller"
	I1123 08:46:22.727229       1 config.go:309] "Starting node config controller"
	I1123 08:46:22.727283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:46:22.727657       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:46:22.727681       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:46:22.727727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:46:22.727218       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:46:22.828812       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:46:22.828841       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:46:22.828861       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9fedd2b23bc112a664b2a93370ec35729bf335846cb1900ce075ccb4249a78bc] <==
	I1123 08:46:20.564560       1 serving.go:386] Generated self-signed cert in-memory
	W1123 08:46:21.135537       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:46:21.135666       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:46:21.135680       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:46:21.135711       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:46:21.172573       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:46:21.172608       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:46:21.175353       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:46:21.175396       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:46:21.175872       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:46:21.176219       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:46:21.276597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:46:22 embed-certs-756339 kubelet[732]: E1123 08:46:22.962751     732 projected.go:196] Error preparing data for projected volume kube-api-access-wmslr for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:22 embed-certs-756339 kubelet[732]: E1123 08:46:22.962816     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d266def-c91d-4fd0-b04a-42a6fd90082f-kube-api-access-wmslr podName:9d266def-c91d-4fd0-b04a-42a6fd90082f nodeName:}" failed. No retries permitted until 2025-11-23 08:46:23.962797235 +0000 UTC m=+4.858298441 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wmslr" (UniqueName: "kubernetes.io/projected/9d266def-c91d-4fd0-b04a-42a6fd90082f-kube-api-access-wmslr") pod "busybox" (UID: "9d266def-c91d-4fd0-b04a-42a6fd90082f") : object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.867779     732 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.867860     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de386500-381b-43aa-9998-52ac07eb6db3-config-volume podName:de386500-381b-43aa-9998-52ac07eb6db3 nodeName:}" failed. No retries permitted until 2025-11-23 08:46:25.867846273 +0000 UTC m=+6.763347467 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/de386500-381b-43aa-9998-52ac07eb6db3-config-volume") pod "coredns-66bc5c9577-ffmn2" (UID: "de386500-381b-43aa-9998-52ac07eb6db3") : object "kube-system"/"coredns" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.968217     732 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.968255     732 projected.go:196] Error preparing data for projected volume kube-api-access-wmslr for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.968334     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d266def-c91d-4fd0-b04a-42a6fd90082f-kube-api-access-wmslr podName:9d266def-c91d-4fd0-b04a-42a6fd90082f nodeName:}" failed. No retries permitted until 2025-11-23 08:46:25.968314403 +0000 UTC m=+6.863815612 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wmslr" (UniqueName: "kubernetes.io/projected/9d266def-c91d-4fd0-b04a-42a6fd90082f-kube-api-access-wmslr") pod "busybox" (UID: "9d266def-c91d-4fd0-b04a-42a6fd90082f") : object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:31 embed-certs-756339 kubelet[732]: I1123 08:46:31.711887     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llnsh\" (UniqueName: \"kubernetes.io/projected/068df84f-d0fd-4037-a87f-270fb7ce8b9c-kube-api-access-llnsh\") pod \"kubernetes-dashboard-855c9754f9-zs7hv\" (UID: \"068df84f-d0fd-4037-a87f-270fb7ce8b9c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv"
	Nov 23 08:46:31 embed-certs-756339 kubelet[732]: I1123 08:46:31.711933     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzthm\" (UniqueName: \"kubernetes.io/projected/f6240862-cccf-46a2-8a05-6679b6cd3746-kube-api-access-wzthm\") pod \"dashboard-metrics-scraper-6ffb444bf9-fgk55\" (UID: \"f6240862-cccf-46a2-8a05-6679b6cd3746\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55"
	Nov 23 08:46:31 embed-certs-756339 kubelet[732]: I1123 08:46:31.711951     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6240862-cccf-46a2-8a05-6679b6cd3746-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fgk55\" (UID: \"f6240862-cccf-46a2-8a05-6679b6cd3746\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55"
	Nov 23 08:46:31 embed-certs-756339 kubelet[732]: I1123 08:46:31.711974     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/068df84f-d0fd-4037-a87f-270fb7ce8b9c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zs7hv\" (UID: \"068df84f-d0fd-4037-a87f-270fb7ce8b9c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv"
	Nov 23 08:46:34 embed-certs-756339 kubelet[732]: I1123 08:46:34.234774     732 scope.go:117] "RemoveContainer" containerID="3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7"
	Nov 23 08:46:34 embed-certs-756339 kubelet[732]: I1123 08:46:34.719195     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:46:35 embed-certs-756339 kubelet[732]: I1123 08:46:35.239057     732 scope.go:117] "RemoveContainer" containerID="3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7"
	Nov 23 08:46:35 embed-certs-756339 kubelet[732]: I1123 08:46:35.239157     732 scope.go:117] "RemoveContainer" containerID="2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	Nov 23 08:46:35 embed-certs-756339 kubelet[732]: E1123 08:46:35.239367     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgk55_kubernetes-dashboard(f6240862-cccf-46a2-8a05-6679b6cd3746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55" podUID="f6240862-cccf-46a2-8a05-6679b6cd3746"
	Nov 23 08:46:36 embed-certs-756339 kubelet[732]: I1123 08:46:36.244355     732 scope.go:117] "RemoveContainer" containerID="2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	Nov 23 08:46:36 embed-certs-756339 kubelet[732]: E1123 08:46:36.244540     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgk55_kubernetes-dashboard(f6240862-cccf-46a2-8a05-6679b6cd3746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55" podUID="f6240862-cccf-46a2-8a05-6679b6cd3746"
	Nov 23 08:46:37 embed-certs-756339 kubelet[732]: I1123 08:46:37.257183     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv" podStartSLOduration=7.158908334 podStartE2EDuration="12.257162324s" podCreationTimestamp="2025-11-23 08:46:25 +0000 UTC" firstStartedPulling="2025-11-23 08:46:31.874390862 +0000 UTC m=+12.769892060" lastFinishedPulling="2025-11-23 08:46:36.972644835 +0000 UTC m=+17.868146050" observedRunningTime="2025-11-23 08:46:37.257113026 +0000 UTC m=+18.152614260" watchObservedRunningTime="2025-11-23 08:46:37.257162324 +0000 UTC m=+18.152663535"
	Nov 23 08:46:41 embed-certs-756339 kubelet[732]: I1123 08:46:41.854035     732 scope.go:117] "RemoveContainer" containerID="2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	Nov 23 08:46:41 embed-certs-756339 kubelet[732]: E1123 08:46:41.854212     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgk55_kubernetes-dashboard(f6240862-cccf-46a2-8a05-6679b6cd3746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55" podUID="f6240862-cccf-46a2-8a05-6679b6cd3746"
	Nov 23 08:46:49 embed-certs-756339 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:46:49 embed-certs-756339 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:46:49 embed-certs-756339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:46:49 embed-certs-756339 systemd[1]: kubelet.service: Consumed 1.039s CPU time.
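
The systemd lines above show kubelet being stopped as part of the pause flow. A minimal sketch (an assumed standalone helper, not minikube code) that queries the same unit state from Go:

// kubelet_state.go: checks whether systemd considers the kubelet unit
// active, mirroring the "Deactivated successfully" transition logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// systemctl exits non-zero when the unit is not active, so the error is
	// informational here; the printed state ("active"/"inactive") is what
	// matters for this sketch.
	out, _ := exec.Command("systemctl", "is-active", "kubelet").Output()
	state := strings.TrimSpace(string(out))
	fmt.Println("kubelet unit state:", state)
}
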
	
	
	==> kubernetes-dashboard [01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a] <==
	2025/11/23 08:46:37 Starting overwatch
	2025/11/23 08:46:37 Using namespace: kubernetes-dashboard
	2025/11/23 08:46:37 Using in-cluster config to connect to apiserver
	2025/11/23 08:46:37 Using secret token for csrf signing
	2025/11/23 08:46:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:46:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:46:37 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:46:37 Generating JWE encryption key
	2025/11/23 08:46:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:46:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:46:37 Initializing JWE encryption key from synchronized object
	2025/11/23 08:46:37 Creating in-cluster Sidecar client
	2025/11/23 08:46:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:46:37 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [40a2d025621f3f6b23ef4784628f70db522cb9678d3dc68a626456ef60906012] <==
	I1123 08:46:22.525332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:46:52.528065       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756339 -n embed-certs-756339
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756339 -n embed-certs-756339: exit status 2 (318.882667ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-756339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
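
A client-go equivalent of the field-selector query above, for readers reproducing the post-mortem by hand (the kubeconfig path is an assumption for illustration):

// nonrunning_pods.go: a minimal sketch equivalent to
// kubectl get po -A --field-selector=status.phase!=Running
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// NamespaceAll ("") lists across all namespaces, like kubectl's -A flag.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
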
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-756339
helpers_test.go:243: (dbg) docker inspect embed-certs-756339:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f",
	        "Created": "2025-11-23T08:45:07.242299769Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 341199,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:46:13.114614168Z",
	            "FinishedAt": "2025-11-23T08:46:12.325342638Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/hosts",
	        "LogPath": "/var/lib/docker/containers/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f/dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f-json.log",
	        "Name": "/embed-certs-756339",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-756339:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-756339",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dcc19a70aae1ff020a375eef6c4774d69923f4fab0e7a1a3debf53decb013e6f",
	                "LowerDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083-init/diff:/var/lib/docker/overlay2/937e3c0d464b96e85545cf013f57ea7510f77eb772694f8d746912e892dbad8f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73bb214aa2c3ba8e871739b33264216b04e59cc2f3b5a62a6452066f65520083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-756339",
	                "Source": "/var/lib/docker/volumes/embed-certs-756339/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-756339",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-756339",
	                "name.minikube.sigs.k8s.io": "embed-certs-756339",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f0f4b2562930a14e63683bedbd410bc6d874fe623ce5e52b80ca1269ebf0b5c4",
	            "SandboxKey": "/var/run/docker/netns/f0f4b2562930",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-756339": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "081a90797f7b1b1fb1a39e8b587fd717235565d36ed01f430e48a85f0e009f66",
	                    "EndpointID": "b4cfc92e522c83babad2b52e908ac798eb73d475355e2d8ca98b97fb33b4d1fb",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2e:fe:71:bf:75:01",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-756339",
	                        "dcc19a70aae1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
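
The port mappings under NetworkSettings.Ports above are how the test harness reaches the node's SSH (22/tcp) and apiserver (8443/tcp) endpoints from the host. A minimal sketch that extracts those mappings from `docker inspect` output (container name taken from this report):

// inspect_ports.go: decodes the host-port bindings shown in the
// NetworkSettings.Ports block above via encoding/json.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspect mirrors just the fields of `docker inspect` output used here;
// the field names match the JSON shown above (HostIp, HostPort).
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "embed-certs-756339").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	for port, binds := range containers[0].NetworkSettings.Ports {
		for _, b := range binds {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}
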
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756339 -n embed-certs-756339
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756339 -n embed-certs-756339: exit status 2 (303.887612ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-756339 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ newest-cni-653361 image list --format=json                                                                                                              │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:44 UTC │
	│ pause   │ -p newest-cni-653361 --alsologtostderr -v=1                                                                                                             │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │                     │
	│ delete  │ -p newest-cni-653361                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:44 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p newest-cni-653361                                                                                                                                    │ newest-cni-653361            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p disable-driver-mounts-177890                                                                                                                         │ disable-driver-mounts-177890 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1  │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ old-k8s-version-057894 image list --format=json                                                                                                         │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p old-k8s-version-057894 --alsologtostderr -v=1                                                                                                        │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ delete  │ -p old-k8s-version-057894                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ delete  │ -p old-k8s-version-057894                                                                                                                               │ old-k8s-version-057894       │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ image   │ default-k8s-diff-port-726261 image list --format=json                                                                                                   │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p default-k8s-diff-port-726261 --alsologtostderr -v=1                                                                                                  │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ image   │ no-preload-187607 image list --format=json                                                                                                              │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:45 UTC │
	│ pause   │ -p no-preload-187607 --alsologtostderr -v=1                                                                                                             │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-756339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │                     │
	│ stop    │ -p embed-certs-756339 --alsologtostderr -v=3                                                                                                            │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p default-k8s-diff-port-726261                                                                                                                         │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p no-preload-187607                                                                                                                                    │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:45 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p default-k8s-diff-port-726261                                                                                                                         │ default-k8s-diff-port-726261 │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ delete  │ -p no-preload-187607                                                                                                                                    │ no-preload-187607            │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ addons  │ enable dashboard -p embed-certs-756339 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                           │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ start   │ -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1  │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ image   │ embed-certs-756339 image list --format=json                                                                                                             │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │ 23 Nov 25 08:46 UTC │
	│ pause   │ -p embed-certs-756339 --alsologtostderr -v=1                                                                                                            │ embed-certs-756339           │ jenkins │ v1.37.0 │ 23 Nov 25 08:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:46:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:46:12.901978  340997 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:46:12.902225  340997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:12.902234  340997 out.go:374] Setting ErrFile to fd 2...
	I1123 08:46:12.902238  340997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:12.902407  340997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:46:12.902818  340997 out.go:368] Setting JSON to false
	I1123 08:46:12.903660  340997 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5320,"bootTime":1763882253,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:46:12.903721  340997 start.go:143] virtualization: kvm guest
	I1123 08:46:12.905664  340997 out.go:179] * [embed-certs-756339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:46:12.907079  340997 notify.go:221] Checking for updates...
	I1123 08:46:12.907094  340997 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:46:12.908152  340997 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:46:12.909235  340997 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:46:12.910245  340997 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:46:12.911279  340997 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:46:12.912251  340997 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:46:12.913722  340997 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:46:12.914183  340997 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:46:12.936695  340997 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:46:12.936778  340997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:12.991437  340997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-23 08:46:12.982256299 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:12.991532  340997 docker.go:319] overlay module found
	I1123 08:46:12.993139  340997 out.go:179] * Using the docker driver based on existing profile
	I1123 08:46:12.994335  340997 start.go:309] selected driver: docker
	I1123 08:46:12.994347  340997 start.go:927] validating driver "docker" against &{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:12.994423  340997 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:46:12.995005  340997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:46:13.047993  340997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-23 08:46:13.038713525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:46:13.048270  340997 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:46:13.048298  340997 cni.go:84] Creating CNI manager for ""
	I1123 08:46:13.048348  340997 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:46:13.048388  340997 start.go:353] cluster config:
	{Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:13.050048  340997 out.go:179] * Starting "embed-certs-756339" primary control-plane node in "embed-certs-756339" cluster
	I1123 08:46:13.051091  340997 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:46:13.052135  340997 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:46:13.053175  340997 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:46:13.053207  340997 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:46:13.053217  340997 cache.go:65] Caching tarball of preloaded images
	I1123 08:46:13.053244  340997 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:46:13.053300  340997 preload.go:238] Found /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:46:13.053314  340997 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:46:13.053423  340997 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:46:13.072755  340997 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:46:13.072770  340997 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:46:13.072785  340997 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:46:13.072817  340997 start.go:360] acquireMachinesLock for embed-certs-756339: {Name:mk2607c5ea38ca6bd330e0a548b36202f67f84a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:46:13.072885  340997 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "embed-certs-756339"
	I1123 08:46:13.072906  340997 start.go:96] Skipping create...Using existing machine configuration
	I1123 08:46:13.072915  340997 fix.go:54] fixHost starting: 
	I1123 08:46:13.073130  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:13.089147  340997 fix.go:112] recreateIfNeeded on embed-certs-756339: state=Stopped err=<nil>
	W1123 08:46:13.089179  340997 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 08:46:13.090669  340997 out.go:252] * Restarting existing docker container for "embed-certs-756339" ...
	I1123 08:46:13.090746  340997 cli_runner.go:164] Run: docker start embed-certs-756339
	I1123 08:46:13.347661  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:13.365299  340997 kic.go:430] container "embed-certs-756339" state is running.
	I1123 08:46:13.365726  340997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:46:13.382955  340997 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/config.json ...
	I1123 08:46:13.383157  340997 machine.go:94] provisionDockerMachine start ...
	I1123 08:46:13.383243  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:13.400993  340997 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:13.401268  340997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I1123 08:46:13.401284  340997 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:46:13.401993  340997 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50720->127.0.0.1:33136: read: connection reset by peer
	I1123 08:46:16.543102  340997 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:46:16.543137  340997 ubuntu.go:182] provisioning hostname "embed-certs-756339"
	I1123 08:46:16.543217  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:16.560360  340997 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:16.560584  340997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I1123 08:46:16.560601  340997 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-756339 && echo "embed-certs-756339" | sudo tee /etc/hostname
	I1123 08:46:16.707400  340997 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-756339
	
	I1123 08:46:16.707471  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:16.724835  340997 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:16.725052  340997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I1123 08:46:16.725075  340997 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-756339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-756339/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-756339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:46:16.863396  340997 main.go:143] libmachine: SSH cmd err, output: <nil>: 
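	
	The shell fragment above is idempotent: it rewrites the 127.0.1.1 line of /etc/hosts only when the new hostname is not already present, and appends an entry otherwise. A standalone sketch of the same pattern (the function name ensure_etc_hosts_hostname is ours, not minikube's):
	
	ensure_etc_hosts_hostname() {
	  local name="$1"
	  # Nothing to do if some /etc/hosts line already carries the hostname.
	  grep -q "[[:space:]]${name}\$" /etc/hosts && return 0
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    # Rewrite the existing 127.0.1.1 entry in place.
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" /etc/hosts
	  else
	    # No 127.0.1.1 entry yet; append one.
	    echo "127.0.1.1 ${name}" | sudo tee -a /etc/hosts >/dev/null
	  fi
	}
	ensure_etc_hosts_hostname embed-certs-756339
	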
	I1123 08:46:16.863423  340997 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21966-10964/.minikube CaCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21966-10964/.minikube}
	I1123 08:46:16.863438  340997 ubuntu.go:190] setting up certificates
	I1123 08:46:16.863454  340997 provision.go:84] configureAuth start
	I1123 08:46:16.863517  340997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:46:16.880826  340997 provision.go:143] copyHostCerts
	I1123 08:46:16.880886  340997 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem, removing ...
	I1123 08:46:16.880903  340997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem
	I1123 08:46:16.880968  340997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/ca.pem (1078 bytes)
	I1123 08:46:16.881060  340997 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem, removing ...
	I1123 08:46:16.881096  340997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem
	I1123 08:46:16.881127  340997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/cert.pem (1123 bytes)
	I1123 08:46:16.881187  340997 exec_runner.go:144] found /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem, removing ...
	I1123 08:46:16.881202  340997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem
	I1123 08:46:16.881233  340997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21966-10964/.minikube/key.pem (1679 bytes)
	I1123 08:46:16.881281  340997 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem org=jenkins.embed-certs-756339 san=[127.0.0.1 192.168.103.2 embed-certs-756339 localhost minikube]
	I1123 08:46:17.077587  340997 provision.go:177] copyRemoteCerts
	I1123 08:46:17.077645  340997 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:46:17.077677  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.095052  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.194032  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:46:17.209980  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 08:46:17.225913  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:46:17.241378  340997 provision.go:87] duration metric: took 377.915171ms to configureAuth
	I1123 08:46:17.241402  340997 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:46:17.241626  340997 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:46:17.241760  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.259214  340997 main.go:143] libmachine: Using SSH client type: native
	I1123 08:46:17.259443  340997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33136 <nil> <nil>}
	I1123 08:46:17.259461  340997 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:46:17.567557  340997 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:46:17.567580  340997 machine.go:97] duration metric: took 4.184402014s to provisionDockerMachine
	I1123 08:46:17.567594  340997 start.go:293] postStartSetup for "embed-certs-756339" (driver="docker")
	I1123 08:46:17.567606  340997 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:46:17.567658  340997 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:46:17.567735  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.586353  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.685006  340997 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:46:17.688100  340997 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:46:17.688129  340997 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:46:17.688139  340997 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/addons for local assets ...
	I1123 08:46:17.688181  340997 filesync.go:126] Scanning /home/jenkins/minikube-integration/21966-10964/.minikube/files for local assets ...
	I1123 08:46:17.688248  340997 filesync.go:149] local asset: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem -> 144882.pem in /etc/ssl/certs
	I1123 08:46:17.688336  340997 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:46:17.695279  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:46:17.710930  340997 start.go:296] duration metric: took 143.32384ms for postStartSetup
	I1123 08:46:17.710989  340997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:46:17.711055  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.728089  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.822936  340997 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:46:17.827093  340997 fix.go:56] duration metric: took 4.754171713s for fixHost
	I1123 08:46:17.827116  340997 start.go:83] releasing machines lock for "embed-certs-756339", held for 4.754217721s
	I1123 08:46:17.827178  340997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-756339
	I1123 08:46:17.845055  340997 ssh_runner.go:195] Run: cat /version.json
	I1123 08:46:17.845115  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.845158  340997 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:46:17.845228  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:17.862337  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.862680  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:17.957636  340997 ssh_runner.go:195] Run: systemctl --version
	I1123 08:46:18.011929  340997 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:46:18.043580  340997 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:46:18.047789  340997 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:46:18.047841  340997 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:46:18.055036  340997 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 08:46:18.055051  340997 start.go:496] detecting cgroup driver to use...
	I1123 08:46:18.055075  340997 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:46:18.055115  340997 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:46:18.068728  340997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:46:18.079363  340997 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:46:18.079399  340997 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:46:18.091759  340997 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:46:18.102275  340997 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:46:18.176082  340997 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:46:18.252373  340997 docker.go:234] disabling docker service ...
	I1123 08:46:18.252443  340997 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:46:18.265014  340997 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:46:18.276152  340997 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:46:18.350567  340997 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:46:18.427370  340997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:46:18.439133  340997 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:46:18.452221  340997 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:46:18.452263  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.460219  340997 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:46:18.460267  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.468113  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.475725  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.483431  340997 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:46:18.490536  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.498413  340997 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.505801  340997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:46:18.513378  340997 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:46:18.520052  340997 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:46:18.526551  340997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:18.601144  340997 ssh_runner.go:195] Run: sudo systemctl restart crio
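	
	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A sketch for inspecting the result; the expected values are reconstructed from the sed commands, not a captured copy of the file:
	
	# Show the keys the edits above set (paths taken from the log).
	sudo grep -A2 -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, reconstructed from the sed commands:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	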
	I1123 08:46:18.732615  340997 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:46:18.732676  340997 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:46:18.736327  340997 start.go:564] Will wait 60s for crictl version
	I1123 08:46:18.736373  340997 ssh_runner.go:195] Run: which crictl
	I1123 08:46:18.739678  340997 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:46:18.761490  340997 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
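	
	crictl reaches CRI-O here because of the /etc/crictl.yaml written a few steps earlier (runtime-endpoint: unix:///var/run/crio/crio.sock), so later crictl calls in this log need no endpoint flags. The same endpoint can be passed per invocation instead; a sketch, assuming crictl is on PATH:
	
	# One-off equivalent of the /etc/crictl.yaml default written above.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# With the config file in place the flag can be dropped:
	sudo crictl version
	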
	I1123 08:46:18.761555  340997 ssh_runner.go:195] Run: crio --version
	I1123 08:46:18.786991  340997 ssh_runner.go:195] Run: crio --version
	I1123 08:46:18.813897  340997 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:46:18.814827  340997 cli_runner.go:164] Run: docker network inspect embed-certs-756339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:46:18.831725  340997 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 08:46:18.835472  340997 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
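	
	This grep-and-rewrite keeps exactly one host.minikube.internal line in /etc/hosts: filter out any stale entry, append the fresh mapping, and sudo-copy the temp file back (a plain > redirect would run as the unprivileged shell, not as root). The same pattern recurs below for control-plane.minikube.internal. A generalized sketch (the function name set_hosts_alias is ours):
	
	set_hosts_alias() {
	  local ip="$1" name="$2" tmp
	  tmp=$(mktemp)
	  # Keep every line whose second field is not the alias, then append it fresh.
	  awk -v n="$name" '$2 != n' /etc/hosts > "$tmp"
	  printf '%s\t%s\n' "$ip" "$name" >> "$tmp"
	  sudo cp "$tmp" /etc/hosts
	  rm -f "$tmp"
	}
	set_hosts_alias 192.168.103.1 host.minikube.internal
	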
	I1123 08:46:18.845241  340997 kubeadm.go:884] updating cluster {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:46:18.845350  340997 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:46:18.845392  340997 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:46:18.877554  340997 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:46:18.877572  340997 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:46:18.877613  340997 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:46:18.899912  340997 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:46:18.899933  340997 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:46:18.899942  340997 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 08:46:18.900046  340997 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-756339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
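	
	The unit text above is installed as a systemd drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp later in the log). The empty ExecStart= line is the standard systemd idiom for replacing, rather than appending to, the base unit's command. A by-hand sketch of the same drop-in:
	
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	# An empty ExecStart= clears the base unit's command; the next line replaces it.
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-756339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet
	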
	I1123 08:46:18.900102  340997 ssh_runner.go:195] Run: crio config
	I1123 08:46:18.942299  340997 cni.go:84] Creating CNI manager for ""
	I1123 08:46:18.942315  340997 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:46:18.942329  340997 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:46:18.942348  340997 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-756339 NodeName:embed-certs-756339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:46:18.942473  340997 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-756339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
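	The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new (scp below) and later diffed against the live copy. To lint such a file by hand, recent kubeadm releases ship a validator; a sketch, assuming kubeadm sits alongside kubectl in the pinned binaries directory:
	
	# `kubeadm config validate` checks the file against this kubeadm version's
	# API types; exit status 0 means the config parses cleanly.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	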
	I1123 08:46:18.942521  340997 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:46:18.949776  340997 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:46:18.949819  340997 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:46:18.956734  340997 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 08:46:18.968094  340997 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:46:18.979264  340997 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1123 08:46:18.990260  340997 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:46:18.993443  340997 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:46:19.002330  340997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:19.079500  340997 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:46:19.102523  340997 certs.go:69] Setting up /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339 for IP: 192.168.103.2
	I1123 08:46:19.102541  340997 certs.go:195] generating shared ca certs ...
	I1123 08:46:19.102557  340997 certs.go:227] acquiring lock for ca certs: {Name:mkd2d42a4f99170549efb6c2f6cddff48f3438bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:19.102709  340997 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key
	I1123 08:46:19.102769  340997 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key
	I1123 08:46:19.102784  340997 certs.go:257] generating profile certs ...
	I1123 08:46:19.102901  340997 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/client.key
	I1123 08:46:19.102972  340997 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key.11e0c354
	I1123 08:46:19.103028  340997 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key
	I1123 08:46:19.103176  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem (1338 bytes)
	W1123 08:46:19.103222  340997 certs.go:480] ignoring /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488_empty.pem, impossibly tiny 0 bytes
	I1123 08:46:19.103237  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:46:19.103274  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:46:19.103309  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:46:19.103345  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/certs/key.pem (1679 bytes)
	I1123 08:46:19.103403  340997 certs.go:484] found cert: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem (1708 bytes)
	I1123 08:46:19.104130  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:46:19.120712  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:46:19.137600  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:46:19.154548  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1123 08:46:19.174779  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:46:19.192961  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:46:19.208896  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:46:19.224356  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/embed-certs-756339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:46:19.239792  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:46:19.255286  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/certs/14488.pem --> /usr/share/ca-certificates/14488.pem (1338 bytes)
	I1123 08:46:19.270992  340997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/ssl/certs/144882.pem --> /usr/share/ca-certificates/144882.pem (1708 bytes)
	I1123 08:46:19.287614  340997 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:46:19.299010  340997 ssh_runner.go:195] Run: openssl version
	I1123 08:46:19.304451  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:46:19.311897  340997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:19.315017  340997 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 07:55 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:19.315054  340997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:46:19.348176  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:46:19.354957  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14488.pem && ln -fs /usr/share/ca-certificates/14488.pem /etc/ssl/certs/14488.pem"
	I1123 08:46:19.362275  340997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14488.pem
	I1123 08:46:19.365623  340997 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:01 /usr/share/ca-certificates/14488.pem
	I1123 08:46:19.365658  340997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14488.pem
	I1123 08:46:19.398414  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14488.pem /etc/ssl/certs/51391683.0"
	I1123 08:46:19.405243  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144882.pem && ln -fs /usr/share/ca-certificates/144882.pem /etc/ssl/certs/144882.pem"
	I1123 08:46:19.412673  340997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144882.pem
	I1123 08:46:19.415933  340997 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:01 /usr/share/ca-certificates/144882.pem
	I1123 08:46:19.415965  340997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144882.pem
	I1123 08:46:19.448374  340997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/144882.pem /etc/ssl/certs/3ec20f2e.0"
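	
	The <hash>.0 symlinks created above are what OpenSSL's directory lookup expects: the file name is the certificate's subject hash (what `openssl x509 -hash` prints) plus a collision counter. Creating one such link by hand, using the minikubeCA values from this log:
	
	cert=/etc/ssl/certs/minikubeCA.pem
	# -hash prints the subject hash OpenSSL uses for directory lookups;
	# for minikubeCA this log shows b5213941.
	hash=$(openssl x509 -hash -noout -in "$cert")
	# ".0" is the collision counter for certificates sharing a hash.
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
	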
	I1123 08:46:19.455326  340997 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:46:19.458664  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 08:46:19.492615  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 08:46:19.525027  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 08:46:19.557519  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 08:46:19.591434  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 08:46:19.632193  340997 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
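	
	`openssl x509 -checkend 86400` exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so each of the six checks above is a cheap "will this cert outlive the next day" probe. The same checks as a loop:
	
	for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	           etcd/healthcheck-client etcd/peer front-proxy-client; do
	  # Exit status 0 = still valid in 24h; nonzero = expiring or expired.
	  if ! sudo openssl x509 -noout -checkend 86400 \
	       -in "/var/lib/minikube/certs/${crt}.crt"; then
	    echo "certificate ${crt} expires within 24h" >&2
	  fi
	done
	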
	I1123 08:46:19.681257  340997 kubeadm.go:401] StartCluster: {Name:embed-certs-756339 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-756339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:46:19.681371  340997 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:46:19.681461  340997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:46:19.716956  340997 cri.go:89] found id: "fecd94a1c38bb8309076aa16021357b70119445d767154a26c2dff547a65ebbc"
	I1123 08:46:19.717001  340997 cri.go:89] found id: "9fedd2b23bc112a664b2a93370ec35729bf335846cb1900ce075ccb4249a78bc"
	I1123 08:46:19.717008  340997 cri.go:89] found id: "4ff763a88033c7e27e70d73ddb7f7e5a0438c94735aa92da4b55a18a0ee6a230"
	I1123 08:46:19.717013  340997 cri.go:89] found id: "e5104273ac6da10ca351abb35f39ede96c9db87366edff5ea0c38cceb92ced59"
	I1123 08:46:19.717017  340997 cri.go:89] found id: ""
	I1123 08:46:19.717067  340997 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 08:46:19.732189  340997 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:46:19Z" level=error msg="open /run/runc: no such file or directory"
	I1123 08:46:19.732264  340997 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:46:19.742034  340997 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 08:46:19.742051  340997 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 08:46:19.742107  340997 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 08:46:19.749351  340997 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:46:19.749799  340997 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-756339" does not appear in /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:46:19.749905  340997 kubeconfig.go:62] /home/jenkins/minikube-integration/21966-10964/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-756339" cluster setting kubeconfig missing "embed-certs-756339" context setting]
	I1123 08:46:19.750131  340997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:19.751281  340997 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 08:46:19.758432  340997 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1123 08:46:19.758465  340997 kubeadm.go:602] duration metric: took 16.40727ms to restartPrimaryControlPlane
	I1123 08:46:19.758478  340997 kubeadm.go:403] duration metric: took 77.234552ms to StartCluster
	I1123 08:46:19.758496  340997 settings.go:142] acquiring lock: {Name:mk95cee866c77966748d0cb440303c0a23989ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:19.758559  340997 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:46:19.759286  340997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21966-10964/kubeconfig: {Name:mk4a3ed3482bc4c98029eff4616a04fcb5ac49be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:46:19.759501  340997 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:46:19.759557  340997 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:46:19.759639  340997 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-756339"
	I1123 08:46:19.759655  340997 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-756339"
	W1123 08:46:19.759661  340997 addons.go:248] addon storage-provisioner should already be in state true
	I1123 08:46:19.759681  340997 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:46:19.759691  340997 addons.go:70] Setting dashboard=true in profile "embed-certs-756339"
	I1123 08:46:19.759711  340997 addons.go:239] Setting addon dashboard=true in "embed-certs-756339"
	W1123 08:46:19.759720  340997 addons.go:248] addon dashboard should already be in state true
	I1123 08:46:19.759731  340997 addons.go:70] Setting default-storageclass=true in profile "embed-certs-756339"
	I1123 08:46:19.759751  340997 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:46:19.759753  340997 config.go:182] Loaded profile config "embed-certs-756339": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:46:19.759759  340997 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-756339"
	I1123 08:46:19.760065  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:19.760123  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:19.760305  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:19.761223  340997 out.go:179] * Verifying Kubernetes components...
	I1123 08:46:19.762405  340997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:46:19.785674  340997 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 08:46:19.786594  340997 addons.go:239] Setting addon default-storageclass=true in "embed-certs-756339"
	W1123 08:46:19.786615  340997 addons.go:248] addon default-storageclass should already be in state true
	I1123 08:46:19.786640  340997 host.go:66] Checking if "embed-certs-756339" exists ...
	I1123 08:46:19.786717  340997 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:46:19.787140  340997 cli_runner.go:164] Run: docker container inspect embed-certs-756339 --format={{.State.Status}}
	I1123 08:46:19.787631  340997 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 08:46:19.787716  340997 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:46:19.787737  340997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:46:19.787784  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:19.788465  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 08:46:19.788478  340997 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 08:46:19.788521  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:19.818114  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:19.819304  340997 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:46:19.819328  340997 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:46:19.819398  340997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756339
	I1123 08:46:19.822764  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:19.842872  340997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33136 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/embed-certs-756339/id_rsa Username:docker}
	I1123 08:46:19.905366  340997 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:46:19.918039  340997 node_ready.go:35] waiting up to 6m0s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:46:19.934162  340997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:46:19.938347  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 08:46:19.938366  340997 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 08:46:19.953067  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 08:46:19.953081  340997 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 08:46:19.956161  340997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:46:19.968408  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 08:46:19.968423  340997 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 08:46:19.982658  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 08:46:19.982673  340997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 08:46:19.995163  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 08:46:19.995305  340997 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 08:46:20.007971  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 08:46:20.007987  340997 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 08:46:20.020578  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 08:46:20.020615  340997 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 08:46:20.034464  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 08:46:20.034481  340997 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 08:46:20.049019  340997 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:46:20.049034  340997 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 08:46:20.060459  340997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 08:46:21.160673  340997 node_ready.go:49] node "embed-certs-756339" is "Ready"
	I1123 08:46:21.160722  340997 node_ready.go:38] duration metric: took 1.242651635s for node "embed-certs-756339" to be "Ready" ...
	I1123 08:46:21.160739  340997 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:46:21.160796  340997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:46:21.630247  340997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.696054158s)
	I1123 08:46:21.630307  340997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.674112649s)
	I1123 08:46:21.630388  340997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.569905342s)
	I1123 08:46:21.630446  340997 api_server.go:72] duration metric: took 1.870911862s to wait for apiserver process to appear ...
	I1123 08:46:21.630544  340997 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:46:21.630564  340997 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:46:21.635119  340997 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-756339 addons enable metrics-server
	
	I1123 08:46:21.637098  340997 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:46:21.637120  340997 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:46:21.642667  340997 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 08:46:21.643646  340997 addons.go:530] duration metric: took 1.884096148s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 08:46:22.131207  340997 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:46:22.135178  340997 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 08:46:22.135221  340997 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 08:46:22.630838  340997 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 08:46:22.635447  340997 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 08:46:22.636270  340997 api_server.go:141] control plane version: v1.34.1
	I1123 08:46:22.636294  340997 api_server.go:131] duration metric: took 1.005743595s to wait for apiserver health ...
	I1123 08:46:22.636304  340997 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:46:22.639251  340997 system_pods.go:59] 8 kube-system pods found
	I1123 08:46:22.639277  340997 system_pods.go:61] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:46:22.639284  340997 system_pods.go:61] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:46:22.639292  340997 system_pods.go:61] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:46:22.639298  340997 system_pods.go:61] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:46:22.639303  340997 system_pods.go:61] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:46:22.639308  340997 system_pods.go:61] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:46:22.639313  340997 system_pods.go:61] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:46:22.639318  340997 system_pods.go:61] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:46:22.639327  340997 system_pods.go:74] duration metric: took 3.016997ms to wait for pod list to return data ...
	I1123 08:46:22.639333  340997 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:46:22.641350  340997 default_sa.go:45] found service account: "default"
	I1123 08:46:22.641367  340997 default_sa.go:55] duration metric: took 2.02915ms for default service account to be created ...
	I1123 08:46:22.641374  340997 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:46:22.645584  340997 system_pods.go:86] 8 kube-system pods found
	I1123 08:46:22.645610  340997 system_pods.go:89] "coredns-66bc5c9577-ffmn2" [de386500-381b-43aa-9998-52ac07eb6db3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:46:22.645617  340997 system_pods.go:89] "etcd-embed-certs-756339" [bcde0f5d-8b6a-4fa8-8fd2-208fc7810a0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 08:46:22.645624  340997 system_pods.go:89] "kindnet-4hsx6" [98980dc0-c70d-4cf6-99cc-54bd34fbaa83] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 08:46:22.645629  340997 system_pods.go:89] "kube-apiserver-embed-certs-756339" [ae6c3e63-ee1e-4525-9f7a-329ac52a644d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 08:46:22.645634  340997 system_pods.go:89] "kube-controller-manager-embed-certs-756339" [be353794-c167-4ff0-be75-33d2938dcdde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 08:46:22.645643  340997 system_pods.go:89] "kube-proxy-npnsh" [ccaada88-aacd-436c-904f-d29f991dd2e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 08:46:22.645647  340997 system_pods.go:89] "kube-scheduler-embed-certs-756339" [3ee0dbf0-ea8d-4ba3-9418-0dc1f2f6f9a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 08:46:22.645654  340997 system_pods.go:89] "storage-provisioner" [ace09d0d-f2aa-4b6a-960e-1f660821a68b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:46:22.645661  340997 system_pods.go:126] duration metric: took 4.281367ms to wait for k8s-apps to be running ...
	I1123 08:46:22.645669  340997 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:46:22.645720  340997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:46:22.658417  340997 system_svc.go:56] duration metric: took 12.742319ms WaitForService to wait for kubelet
	I1123 08:46:22.658442  340997 kubeadm.go:587] duration metric: took 2.898909117s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:46:22.658463  340997 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:46:22.660710  340997 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:46:22.660735  340997 node_conditions.go:123] node cpu capacity is 8
	I1123 08:46:22.660759  340997 node_conditions.go:105] duration metric: took 2.282592ms to run NodePressure ...
	I1123 08:46:22.660777  340997 start.go:242] waiting for startup goroutines ...
	I1123 08:46:22.660791  340997 start.go:247] waiting for cluster config update ...
	I1123 08:46:22.660806  340997 start.go:256] writing updated cluster config ...
	I1123 08:46:22.661057  340997 ssh_runner.go:195] Run: rm -f paused
	I1123 08:46:22.664524  340997 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:46:22.667090  340997 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 08:46:24.673192  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: node "embed-certs-756339" hosting pod "coredns-66bc5c9577-ffmn2" is not "Ready" (will retry)
	W1123 08:46:27.172047  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: node "embed-certs-756339" hosting pod "coredns-66bc5c9577-ffmn2" is not "Ready" (will retry)
	W1123 08:46:29.671966  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: node "embed-certs-756339" hosting pod "coredns-66bc5c9577-ffmn2" is not "Ready" (will retry)
	W1123 08:46:32.173085  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: <nil>
	W1123 08:46:34.673334  340997 pod_ready.go:104] pod "coredns-66bc5c9577-ffmn2" is not "Ready", error: <nil>
	I1123 08:46:35.173129  340997 pod_ready.go:94] pod "coredns-66bc5c9577-ffmn2" is "Ready"
	I1123 08:46:35.173158  340997 pod_ready.go:86] duration metric: took 12.506047472s for pod "coredns-66bc5c9577-ffmn2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.175738  340997 pod_ready.go:83] waiting for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.179790  340997 pod_ready.go:94] pod "etcd-embed-certs-756339" is "Ready"
	I1123 08:46:35.179808  340997 pod_ready.go:86] duration metric: took 4.047689ms for pod "etcd-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.181849  340997 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.688207  340997 pod_ready.go:94] pod "kube-apiserver-embed-certs-756339" is "Ready"
	I1123 08:46:35.688240  340997 pod_ready.go:86] duration metric: took 506.369537ms for pod "kube-apiserver-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:35.690480  340997 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:36.695578  340997 pod_ready.go:94] pod "kube-controller-manager-embed-certs-756339" is "Ready"
	I1123 08:46:36.695604  340997 pod_ready.go:86] duration metric: took 1.005104111s for pod "kube-controller-manager-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:36.771015  340997 pod_ready.go:83] waiting for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:37.170426  340997 pod_ready.go:94] pod "kube-proxy-npnsh" is "Ready"
	I1123 08:46:37.170450  340997 pod_ready.go:86] duration metric: took 399.414309ms for pod "kube-proxy-npnsh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:37.371202  340997 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:37.770286  340997 pod_ready.go:94] pod "kube-scheduler-embed-certs-756339" is "Ready"
	I1123 08:46:37.770311  340997 pod_ready.go:86] duration metric: took 399.084634ms for pod "kube-scheduler-embed-certs-756339" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:46:37.770322  340997 pod_ready.go:40] duration metric: took 15.105775579s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:46:37.811542  340997 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:46:37.812894  340997 out.go:179] * Done! kubectl is now configured to use "embed-certs-756339" cluster and "default" namespace by default
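	
	For reference, the healthz sequence above (HTTP 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks settle, then HTTP 200 about a second later) is an ordinary poll-until-ready loop. A minimal Go sketch of the same pattern, assuming the apiserver address from the log and skipping CA verification only to keep the example self-contained, not how minikube itself implements the wait:
	
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)
	
		func main() {
			// Endpoint and cadence mirror the log above; a real client would
			// trust the cluster CA instead of setting InsecureSkipVerify.
			client := &http.Client{
				Timeout: 5 * time.Second,
				Transport: &http.Transport{
					TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
				},
			}
			for i := 0; i < 60; i++ {
				resp, err := client.Get("https://192.168.103.2:8443/healthz")
				if err == nil {
					body, _ := io.ReadAll(resp.Body)
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK {
						fmt.Printf("healthz: %s\n", body) // "ok" once ready
						return
					}
					// A 500 with "healthz check failed" lands here, as in the log.
					fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
				}
				time.Sleep(500 * time.Millisecond)
			}
			fmt.Println("apiserver never became healthy")
		}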
	
	
	==> CRI-O <==
	Nov 23 08:46:33 embed-certs-756339 crio[568]: time="2025-11-23T08:46:33.772315864Z" level=info msg="Created container 3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper" id=06d12ff3-8a9f-4789-8487-a7b246db87e3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:33 embed-certs-756339 crio[568]: time="2025-11-23T08:46:33.772814557Z" level=info msg="Starting container: 3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7" id=e8640961-2e91-49cd-9b2f-8455c3c9fc3c name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:46:33 embed-certs-756339 crio[568]: time="2025-11-23T08:46:33.774392543Z" level=info msg="Started container" PID=1633 containerID=3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper id=e8640961-2e91-49cd-9b2f-8455c3c9fc3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8a5636aaf38911a1dab4ae0cf765cbbdce95abdbf16db58a10e206423f7a06f
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.235167227Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b0d9131d-288c-42ac-abe1-75cea9830988 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.237655687Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2cb93beb-b609-45c8-8ea3-9b1a22eaadfb name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.240222416Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper" id=c0879848-e289-454f-93aa-17e82bafebaa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.240343151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.247580488Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.248232193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.275304605Z" level=info msg="Created container 2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper" id=c0879848-e289-454f-93aa-17e82bafebaa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.27586203Z" level=info msg="Starting container: 2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0" id=91e0c3e3-a653-4f27-b221-e39abee81f62 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:46:34 embed-certs-756339 crio[568]: time="2025-11-23T08:46:34.277384303Z" level=info msg="Started container" PID=1644 containerID=2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper id=91e0c3e3-a653-4f27-b221-e39abee81f62 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8a5636aaf38911a1dab4ae0cf765cbbdce95abdbf16db58a10e206423f7a06f
	Nov 23 08:46:35 embed-certs-756339 crio[568]: time="2025-11-23T08:46:35.240505639Z" level=info msg="Removing container: 3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7" id=4103dd4e-cd0b-4bcb-be8f-631eedfe1494 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:46:35 embed-certs-756339 crio[568]: time="2025-11-23T08:46:35.251341583Z" level=info msg="Removed container 3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55/dashboard-metrics-scraper" id=4103dd4e-cd0b-4bcb-be8f-631eedfe1494 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.971036694Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=9a4aa898-1428-4f99-bc79-6178ac0afff5 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.971663606Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f5c8ee92-381b-4e1b-9754-26dd8ac70f17 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.973165398Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1c4997d9-d05e-4405-89b7-944dc97328d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.976746791Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv/kubernetes-dashboard" id=b2699d5f-1e9e-4fce-92c7-ffd55e2aeee8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.976848367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.980432796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.980598898Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/63d171f613f5e892d49d5c8703e16c9e259f373f11d466ffd1054ca5de136c56/merged/etc/group: no such file or directory"
	Nov 23 08:46:36 embed-certs-756339 crio[568]: time="2025-11-23T08:46:36.980917748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:46:37 embed-certs-756339 crio[568]: time="2025-11-23T08:46:37.006366258Z" level=info msg="Created container 01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv/kubernetes-dashboard" id=b2699d5f-1e9e-4fce-92c7-ffd55e2aeee8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:46:37 embed-certs-756339 crio[568]: time="2025-11-23T08:46:37.006932211Z" level=info msg="Starting container: 01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a" id=866925e3-731b-4ef3-a856-7bf0709cc81a name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:46:37 embed-certs-756339 crio[568]: time="2025-11-23T08:46:37.008443686Z" level=info msg="Started container" PID=1696 containerID=01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv/kubernetes-dashboard id=866925e3-731b-4ef3-a856-7bf0709cc81a name=/runtime.v1.RuntimeService/StartContainer sandboxID=cdbb38a0bdf5b0e60729384c37f2cdb9aca04f829da50ac5108b34a070c1bbf9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	01ad288a6ce4d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   17 seconds ago      Running             kubernetes-dashboard        0                   cdbb38a0bdf5b       kubernetes-dashboard-855c9754f9-zs7hv        kubernetes-dashboard
	2b807150e95f4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   1                   e8a5636aaf389       dashboard-metrics-scraper-6ffb444bf9-fgk55   kubernetes-dashboard
	bbbf5811d98e5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           28 seconds ago      Running             coredns                     0                   7ce5ff51b870c       coredns-66bc5c9577-ffmn2                     kube-system
	be4e071c05335       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           28 seconds ago      Running             busybox                     1                   ead5272d793cf       busybox                                      default
	114c2a65428ab       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           31 seconds ago      Running             kube-proxy                  0                   038b49b3ae23c       kube-proxy-npnsh                             kube-system
	40a2d025621f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           31 seconds ago      Exited              storage-provisioner         0                   0b87ad4dec4ca       storage-provisioner                          kube-system
	667faaf0e8e58       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           31 seconds ago      Running             kindnet-cni                 0                   20a5291bafc14       kindnet-4hsx6                                kube-system
	fecd94a1c38bb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           34 seconds ago      Running             kube-controller-manager     0                   ed438f70ab71e       kube-controller-manager-embed-certs-756339   kube-system
	9fedd2b23bc11       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           34 seconds ago      Running             kube-scheduler              0                   469fe93b05de4       kube-scheduler-embed-certs-756339            kube-system
	4ff763a88033c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           34 seconds ago      Running             etcd                        0                   80533651da939       etcd-embed-certs-756339                      kube-system
	e5104273ac6da       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           34 seconds ago      Running             kube-apiserver              0                   0e3eb146a4bd0       kube-apiserver-embed-certs-756339            kube-system
	
	
	==> coredns [bbbf5811d98e599301ff4819a115c3d8ef0030269a4475f9e2870b48cf71a5a6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40087 - 57775 "HINFO IN 5637573327108728621.246995809119248048. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.059543582s
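	
	The single query logged above matches the CoreDNS "loop" plugin's startup self-probe, a random-name HINFO lookup used to detect forwarding loops; the NXDOMAIN answer is the expected healthy outcome, so this section shows a clean CoreDNS start.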
	
	
	==> describe nodes <==
	Name:               embed-certs-756339
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-756339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e219827a5f064cf736992b79e59864301ece66e
	                    minikube.k8s.io/name=embed-certs-756339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_45_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:45:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-756339
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:46:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:46:31 +0000   Sun, 23 Nov 2025 08:45:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:46:31 +0000   Sun, 23 Nov 2025 08:45:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:46:31 +0000   Sun, 23 Nov 2025 08:45:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:46:31 +0000   Sun, 23 Nov 2025 08:46:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-756339
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d012ad2e-0684-44d6-8937-6f0e3eaafce4
	  Boot ID:                    a4cf5a76-4221-4eeb-bf78-86ca81e56134
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-66bc5c9577-ffmn2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     83s
	  kube-system                 etcd-embed-certs-756339                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         90s
	  kube-system                 kindnet-4hsx6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-embed-certs-756339             250m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-embed-certs-756339    200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-npnsh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-embed-certs-756339             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fgk55    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zs7hv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 31s                kube-proxy       
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  93s (x8 over 93s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s (x8 over 93s)  kubelet          Node embed-certs-756339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s (x8 over 93s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node embed-certs-756339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node embed-certs-756339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     89s                kubelet          Node embed-certs-756339 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           84s                node-controller  Node embed-certs-756339 event: Registered Node embed-certs-756339 in Controller
	  Normal  NodeReady                72s                kubelet          Node embed-certs-756339 status is now: NodeReady
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node embed-certs-756339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node embed-certs-756339 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node embed-certs-756339 event: Registered Node embed-certs-756339 in Controller
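	
	The node dump above is standard kubectl describe node output; it can be reproduced against this profile (assuming the test kubeconfig is still in place) with:
	
		out/minikube-linux-amd64 -p embed-certs-756339 kubectl -- describe node embed-certs-756339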
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[ +17.977414] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 35 c1 7e bf b6 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 72 3e da ab 9b 8a 08 06
	[Nov23 08:42] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 5a 72 3c 2a 6e 23 08 06
	[  +0.024933] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +22.812257] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 0a 15 3f bb 8c c0 08 06
	[  +0.000290] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 8b 2d 84 e6 5f 08 06
	[ +11.996457] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[  +1.172394] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
	[Nov23 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 94 76 6f 60 44 08 06
	[  +0.000369] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 9f d9 1c 8f 74 08 06
	[ +30.986796] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b1 f5 c8 c3 5a 08 06
	[  +0.000482] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 48 25 9a 7b 69 08 06
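	
	The recurring "martian source" entries are the kernel flagging ARP broadcasts (ethertype 08 06 in the ll header) carrying pod-subnet 10.244.0.x addresses on eth0, routine noise for this style of bridged pod networking rather than a failure. They appear only because martian logging is enabled; to inspect or silence it on the host (an aid for reading these logs, not something the tests require):
	
		sysctl net.ipv4.conf.all.log_martians
		sudo sysctl -w net.ipv4.conf.all.log_martians=0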
	
	
	==> etcd [4ff763a88033c7e27e70d73ddb7f7e5a0438c94735aa92da4b55a18a0ee6a230] <==
	{"level":"warn","ts":"2025-11-23T08:46:20.553419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.560794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.571184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.578357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.585056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.592072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.598278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.606900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.613525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.620484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.630888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.637521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.644321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.659587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.665905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.672451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.678306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.684598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.690421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.696908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.704296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.719190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.725094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.730907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:46:20.778272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33346","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:46:54 up  1:29,  0 user,  load average: 2.09, 3.29, 2.37
	Linux embed-certs-756339 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [667faaf0e8e58c57d611bef19454dabbf3702a12f04127678f55004f0d720ff5] <==
	I1123 08:46:22.656677       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:46:22.656918       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 08:46:22.657017       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:46:22.657032       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:46:22.657054       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:46:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:46:22.858673       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:46:22.859023       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:46:22.859065       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:46:22.859563       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:46:52.860167       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 08:46:52.860170       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 08:46:52.860167       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:46:52.860167       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1123 08:46:54.459662       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:46:54.459702       1 metrics.go:72] Registering metrics
	I1123 08:46:54.459792       1 controller.go:711] "Syncing nftables rules"
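	
	The three reflector timeouts above resolve on their own: the 10.96.0.1:443 service VIP is unreachable for roughly 30 seconds while kube-proxy reprograms its rules after the restart, client-go reflectors simply retry, and the caches sync at 08:46:54.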
	
	
	==> kube-apiserver [e5104273ac6da10ca351abb35f39ede96c9db87366edff5ea0c38cceb92ced59] <==
	I1123 08:46:21.220268       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 08:46:21.220282       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 08:46:21.220543       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 08:46:21.220341       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 08:46:21.220293       1 aggregator.go:171] initial CRD sync complete...
	I1123 08:46:21.221030       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 08:46:21.221037       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:46:21.221043       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:46:21.220314       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 08:46:21.220325       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1123 08:46:21.225578       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 08:46:21.228000       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 08:46:21.253620       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:46:21.267983       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 08:46:21.443586       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:46:21.467647       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:46:21.482875       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:46:21.489159       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:46:21.494589       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:46:21.521717       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.131.251"}
	I1123 08:46:21.531361       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.49.200"}
	I1123 08:46:22.121851       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:46:24.755824       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:46:24.956335       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:46:25.006189       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fecd94a1c38bb8309076aa16021357b70119445d767154a26c2dff547a65ebbc] <==
	I1123 08:46:24.513355       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:46:24.513362       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:46:24.515014       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:46:24.551814       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:46:24.552831       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:46:24.552852       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:46:24.552864       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:46:24.553019       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:46:24.553032       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 08:46:24.553096       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:46:24.553124       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 08:46:24.553266       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:46:24.553399       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:46:24.553569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 08:46:24.554395       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 08:46:24.554500       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 08:46:24.555642       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:46:24.556804       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:46:24.557928       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:46:24.559005       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:46:24.565268       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:46:24.566456       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:46:24.569722       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:46:24.575934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:46:34.479589       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [114c2a65428abda378156b9d44f78ab253febe754033d2ee3d3e166424ad8c09] <==
	I1123 08:46:22.548891       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:46:22.602378       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:46:22.703352       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:46:22.703387       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 08:46:22.703469       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:46:22.720115       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:46:22.720171       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:46:22.725172       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:46:22.725627       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:46:22.725657       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:46:22.727066       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:46:22.727085       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:46:22.727136       1 config.go:200] "Starting service config controller"
	I1123 08:46:22.727229       1 config.go:309] "Starting node config controller"
	I1123 08:46:22.727283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:46:22.727657       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:46:22.727681       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:46:22.727727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:46:22.727218       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:46:22.828812       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:46:22.828841       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:46:22.828861       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [9fedd2b23bc112a664b2a93370ec35729bf335846cb1900ce075ccb4249a78bc] <==
	I1123 08:46:20.564560       1 serving.go:386] Generated self-signed cert in-memory
	W1123 08:46:21.135537       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 08:46:21.135666       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:46:21.135680       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 08:46:21.135711       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 08:46:21.172573       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:46:21.172608       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:46:21.175353       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:46:21.175396       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:46:21.175872       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:46:21.176219       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:46:21.276597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:46:22 embed-certs-756339 kubelet[732]: E1123 08:46:22.962751     732 projected.go:196] Error preparing data for projected volume kube-api-access-wmslr for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:22 embed-certs-756339 kubelet[732]: E1123 08:46:22.962816     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d266def-c91d-4fd0-b04a-42a6fd90082f-kube-api-access-wmslr podName:9d266def-c91d-4fd0-b04a-42a6fd90082f nodeName:}" failed. No retries permitted until 2025-11-23 08:46:23.962797235 +0000 UTC m=+4.858298441 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wmslr" (UniqueName: "kubernetes.io/projected/9d266def-c91d-4fd0-b04a-42a6fd90082f-kube-api-access-wmslr") pod "busybox" (UID: "9d266def-c91d-4fd0-b04a-42a6fd90082f") : object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.867779     732 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.867860     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de386500-381b-43aa-9998-52ac07eb6db3-config-volume podName:de386500-381b-43aa-9998-52ac07eb6db3 nodeName:}" failed. No retries permitted until 2025-11-23 08:46:25.867846273 +0000 UTC m=+6.763347467 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/de386500-381b-43aa-9998-52ac07eb6db3-config-volume") pod "coredns-66bc5c9577-ffmn2" (UID: "de386500-381b-43aa-9998-52ac07eb6db3") : object "kube-system"/"coredns" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.968217     732 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.968255     732 projected.go:196] Error preparing data for projected volume kube-api-access-wmslr for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:23 embed-certs-756339 kubelet[732]: E1123 08:46:23.968334     732 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d266def-c91d-4fd0-b04a-42a6fd90082f-kube-api-access-wmslr podName:9d266def-c91d-4fd0-b04a-42a6fd90082f nodeName:}" failed. No retries permitted until 2025-11-23 08:46:25.968314403 +0000 UTC m=+6.863815612 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wmslr" (UniqueName: "kubernetes.io/projected/9d266def-c91d-4fd0-b04a-42a6fd90082f-kube-api-access-wmslr") pod "busybox" (UID: "9d266def-c91d-4fd0-b04a-42a6fd90082f") : object "default"/"kube-root-ca.crt" not registered
	Nov 23 08:46:31 embed-certs-756339 kubelet[732]: I1123 08:46:31.711887     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llnsh\" (UniqueName: \"kubernetes.io/projected/068df84f-d0fd-4037-a87f-270fb7ce8b9c-kube-api-access-llnsh\") pod \"kubernetes-dashboard-855c9754f9-zs7hv\" (UID: \"068df84f-d0fd-4037-a87f-270fb7ce8b9c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv"
	Nov 23 08:46:31 embed-certs-756339 kubelet[732]: I1123 08:46:31.711933     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzthm\" (UniqueName: \"kubernetes.io/projected/f6240862-cccf-46a2-8a05-6679b6cd3746-kube-api-access-wzthm\") pod \"dashboard-metrics-scraper-6ffb444bf9-fgk55\" (UID: \"f6240862-cccf-46a2-8a05-6679b6cd3746\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55"
	Nov 23 08:46:31 embed-certs-756339 kubelet[732]: I1123 08:46:31.711951     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6240862-cccf-46a2-8a05-6679b6cd3746-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fgk55\" (UID: \"f6240862-cccf-46a2-8a05-6679b6cd3746\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55"
	Nov 23 08:46:31 embed-certs-756339 kubelet[732]: I1123 08:46:31.711974     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/068df84f-d0fd-4037-a87f-270fb7ce8b9c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zs7hv\" (UID: \"068df84f-d0fd-4037-a87f-270fb7ce8b9c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv"
	Nov 23 08:46:34 embed-certs-756339 kubelet[732]: I1123 08:46:34.234774     732 scope.go:117] "RemoveContainer" containerID="3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7"
	Nov 23 08:46:34 embed-certs-756339 kubelet[732]: I1123 08:46:34.719195     732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 08:46:35 embed-certs-756339 kubelet[732]: I1123 08:46:35.239057     732 scope.go:117] "RemoveContainer" containerID="3c745662603a434048040f9b7c5a242bd028c63a7d1c285df9c9483c3e9bede7"
	Nov 23 08:46:35 embed-certs-756339 kubelet[732]: I1123 08:46:35.239157     732 scope.go:117] "RemoveContainer" containerID="2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	Nov 23 08:46:35 embed-certs-756339 kubelet[732]: E1123 08:46:35.239367     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgk55_kubernetes-dashboard(f6240862-cccf-46a2-8a05-6679b6cd3746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55" podUID="f6240862-cccf-46a2-8a05-6679b6cd3746"
	Nov 23 08:46:36 embed-certs-756339 kubelet[732]: I1123 08:46:36.244355     732 scope.go:117] "RemoveContainer" containerID="2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	Nov 23 08:46:36 embed-certs-756339 kubelet[732]: E1123 08:46:36.244540     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgk55_kubernetes-dashboard(f6240862-cccf-46a2-8a05-6679b6cd3746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55" podUID="f6240862-cccf-46a2-8a05-6679b6cd3746"
	Nov 23 08:46:37 embed-certs-756339 kubelet[732]: I1123 08:46:37.257183     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zs7hv" podStartSLOduration=7.158908334 podStartE2EDuration="12.257162324s" podCreationTimestamp="2025-11-23 08:46:25 +0000 UTC" firstStartedPulling="2025-11-23 08:46:31.874390862 +0000 UTC m=+12.769892060" lastFinishedPulling="2025-11-23 08:46:36.972644835 +0000 UTC m=+17.868146050" observedRunningTime="2025-11-23 08:46:37.257113026 +0000 UTC m=+18.152614260" watchObservedRunningTime="2025-11-23 08:46:37.257162324 +0000 UTC m=+18.152663535"
	Nov 23 08:46:41 embed-certs-756339 kubelet[732]: I1123 08:46:41.854035     732 scope.go:117] "RemoveContainer" containerID="2b807150e95f43263a7e4329bf937197087e75d4ee4f79779cdf0cd30a3475a0"
	Nov 23 08:46:41 embed-certs-756339 kubelet[732]: E1123 08:46:41.854212     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fgk55_kubernetes-dashboard(f6240862-cccf-46a2-8a05-6679b6cd3746)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fgk55" podUID="f6240862-cccf-46a2-8a05-6679b6cd3746"
	Nov 23 08:46:49 embed-certs-756339 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 08:46:49 embed-certs-756339 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 08:46:49 embed-certs-756339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 08:46:49 embed-certs-756339 systemd[1]: kubelet.service: Consumed 1.039s CPU time.
	
	
	==> kubernetes-dashboard [01ad288a6ce4d665ac9a970f891f62e0cebc0a8b6f663ea6a7277bc4b8b4232a] <==
	2025/11/23 08:46:37 Using namespace: kubernetes-dashboard
	2025/11/23 08:46:37 Using in-cluster config to connect to apiserver
	2025/11/23 08:46:37 Using secret token for csrf signing
	2025/11/23 08:46:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:46:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:46:37 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:46:37 Generating JWE encryption key
	2025/11/23 08:46:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:46:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:46:37 Initializing JWE encryption key from synchronized object
	2025/11/23 08:46:37 Creating in-cluster Sidecar client
	2025/11/23 08:46:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 08:46:37 Serving insecurely on HTTP port: 9090
	2025/11/23 08:46:37 Starting overwatch
	
	
	==> storage-provisioner [40a2d025621f3f6b23ef4784628f70db522cb9678d3dc68a626456ef60906012] <==
	I1123 08:46:22.525332       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:46:52.528065       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
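The common signature in the dump above is that in-cluster clients (the kube-network-policies reflectors and the storage-provisioner) time out against the service VIP 10.96.0.1:443 for roughly 30 seconds after the restart, while kube-proxy reports the node IP 192.168.103.2. A minimal manual triage sketch for that signature, assuming the embed-certs-756339 profile still exists and that curl is available in the node image (both assumptions, not part of the test run):

	# Probe the service VIP that the reflectors were timing out on:
	out/minikube-linux-amd64 ssh -p embed-certs-756339 "curl -sk --max-time 5 https://10.96.0.1:443/version"
	# Probe the apiserver directly via the node IP from the kube-proxy log
	# (8443 is minikube's usual apiserver port; confirm against the profile config):
	out/minikube-linux-amd64 ssh -p embed-certs-756339 "curl -sk --max-time 5 https://192.168.103.2:8443/version"

If the first probe hangs while the second answers, the problem is service-VIP plumbing (kube-proxy rules) rather than the apiserver itself.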
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756339 -n embed-certs-756339
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756339 -n embed-certs-756339: exit status 2 (321.160038ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-756339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.74s)
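A rough local repro sketch for this failure mode follows; it approximates the harness steps above rather than reproducing their exact invocation, and the profile name is arbitrary:

	out/minikube-linux-amd64 start -p embed-certs-repro --driver=docker --container-runtime=crio --embed-certs
	out/minikube-linux-amd64 stop -p embed-certs-repro
	out/minikube-linux-amd64 start -p embed-certs-repro                          # the SecondStart step
	out/minikube-linux-amd64 pause -p embed-certs-repro --alsologtostderr -v=1
	out/minikube-linux-amd64 status -p embed-certs-repro --format={{.APIServer}}  # compare with the exit status 2 seen above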

                                                
                                    

Test pass (264/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 3.91
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.35
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.78
22 TestOffline 54.11
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 123.2
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 7.39
48 TestAddons/StoppedEnableDisable 16.61
49 TestCertOptions 27.83
50 TestCertExpiration 224.04
52 TestForceSystemdFlag 28.6
53 TestForceSystemdEnv 31.88
58 TestErrorSpam/setup 21.3
59 TestErrorSpam/start 0.62
60 TestErrorSpam/status 0.92
61 TestErrorSpam/pause 6.74
62 TestErrorSpam/unpause 5.04
63 TestErrorSpam/stop 8.03
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 69.6
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 14.53
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.57
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 97.07
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.14
86 TestFunctional/serial/LogsFileCmd 1.16
87 TestFunctional/serial/InvalidService 4.52
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 6.83
91 TestFunctional/parallel/DryRun 0.62
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 0.93
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 25.68
101 TestFunctional/parallel/SSHCmd 0.59
102 TestFunctional/parallel/CpCmd 1.93
103 TestFunctional/parallel/MySQL 14.9
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.61
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.56
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.47
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.07
121 TestFunctional/parallel/ImageCommands/Setup 0.96
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.21
128 TestFunctional/parallel/ProfileCmd/profile_list 0.44
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/MountCmd/any-port 5.86
145 TestFunctional/parallel/MountCmd/specific-port 1.87
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 133.23
163 TestMultiControlPlane/serial/DeployApp 4.12
164 TestMultiControlPlane/serial/PingHostFromPods 0.99
165 TestMultiControlPlane/serial/AddWorkerNode 53.79
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
168 TestMultiControlPlane/serial/CopyFile 16.76
169 TestMultiControlPlane/serial/StopSecondaryNode 13.75
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.6
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 127.28
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.45
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 43.75
177 TestMultiControlPlane/serial/RestartCluster 55.57
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 69.75
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
185 TestJSONOutput/start/Command 37.97
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.04
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 29.38
211 TestKicCustomNetwork/use_default_bridge_network 22.35
212 TestKicExistingNetwork 27.61
213 TestKicCustomSubnet 26.61
214 TestKicStaticIP 23.48
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 49.61
219 TestMountStart/serial/StartWithMountFirst 4.67
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 4.68
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.23
226 TestMountStart/serial/RestartStopped 7.31
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 88.58
231 TestMultiNode/serial/DeployApp2Nodes 3.24
232 TestMultiNode/serial/PingHostFrom2Pods 0.68
233 TestMultiNode/serial/AddNode 23.37
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.55
237 TestMultiNode/serial/StopNode 2.21
238 TestMultiNode/serial/StartAfterStop 7.12
239 TestMultiNode/serial/RestartKeepsNodes 57.27
240 TestMultiNode/serial/DeleteNode 4.98
241 TestMultiNode/serial/StopMultiNode 28.37
242 TestMultiNode/serial/RestartMultiNode 31.83
243 TestMultiNode/serial/ValidateNameConflict 25.58
248 TestPreload 80.25
250 TestScheduledStopUnix 99.32
253 TestInsufficientStorage 12.24
254 TestRunningBinaryUpgrade 47.18
256 TestKubernetesUpgrade 306.17
257 TestMissingContainerUpgrade 97.99
259 TestPause/serial/Start 81.06
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
262 TestNoKubernetes/serial/StartWithK8s 21.43
263 TestNoKubernetes/serial/StartWithStopK8s 18.18
271 TestNetworkPlugins/group/false 3.25
275 TestNoKubernetes/serial/Start 4.16
276 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
278 TestNoKubernetes/serial/ProfileList 1.78
279 TestNoKubernetes/serial/Stop 1.27
280 TestNoKubernetes/serial/StartNoArgs 6.53
281 TestPause/serial/SecondStartNoReconfiguration 6.03
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
284 TestStoppedBinaryUpgrade/Setup 0.55
285 TestStoppedBinaryUpgrade/Upgrade 76.17
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
294 TestNetworkPlugins/group/auto/Start 38.92
295 TestNetworkPlugins/group/kindnet/Start 72.55
296 TestNetworkPlugins/group/calico/Start 51.35
297 TestNetworkPlugins/group/auto/KubeletFlags 0.3
298 TestNetworkPlugins/group/auto/NetCatPod 9.2
299 TestNetworkPlugins/group/auto/DNS 0.11
300 TestNetworkPlugins/group/auto/Localhost 0.17
301 TestNetworkPlugins/group/auto/HairPin 0.09
302 TestNetworkPlugins/group/custom-flannel/Start 49.3
303 TestNetworkPlugins/group/calico/ControllerPod 6
304 TestNetworkPlugins/group/calico/KubeletFlags 0.29
305 TestNetworkPlugins/group/calico/NetCatPod 9.25
306 TestNetworkPlugins/group/calico/DNS 0.11
307 TestNetworkPlugins/group/calico/Localhost 0.08
308 TestNetworkPlugins/group/calico/HairPin 0.08
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
311 TestNetworkPlugins/group/kindnet/NetCatPod 8.19
312 TestNetworkPlugins/group/kindnet/DNS 0.11
313 TestNetworkPlugins/group/kindnet/Localhost 0.1
314 TestNetworkPlugins/group/kindnet/HairPin 0.09
315 TestNetworkPlugins/group/enable-default-cni/Start 42.74
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.53
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
318 TestNetworkPlugins/group/custom-flannel/DNS 0.11
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
321 TestNetworkPlugins/group/flannel/Start 53.08
322 TestNetworkPlugins/group/bridge/Start 69.43
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.16
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.1
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.08
328 TestNetworkPlugins/group/flannel/ControllerPod 6
330 TestStartStop/group/old-k8s-version/serial/FirstStart 53.11
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
332 TestNetworkPlugins/group/flannel/NetCatPod 10.74
333 TestNetworkPlugins/group/flannel/DNS 0.11
334 TestNetworkPlugins/group/flannel/Localhost 0.08
335 TestNetworkPlugins/group/flannel/HairPin 0.09
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
337 TestNetworkPlugins/group/bridge/NetCatPod 11.21
339 TestStartStop/group/no-preload/serial/FirstStart 55.64
341 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.53
342 TestNetworkPlugins/group/bridge/DNS 0.14
343 TestNetworkPlugins/group/bridge/Localhost 0.11
344 TestNetworkPlugins/group/bridge/HairPin 0.16
345 TestStartStop/group/old-k8s-version/serial/DeployApp 9.28
347 TestStartStop/group/old-k8s-version/serial/Stop 17.84
349 TestStartStop/group/newest-cni/serial/FirstStart 25.72
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
351 TestStartStop/group/old-k8s-version/serial/SecondStart 43.81
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
353 TestStartStop/group/no-preload/serial/DeployApp 8.22
354 TestStartStop/group/newest-cni/serial/DeployApp 0
356 TestStartStop/group/newest-cni/serial/Stop 12.66
358 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.18
360 TestStartStop/group/no-preload/serial/Stop 16.33
361 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
362 TestStartStop/group/newest-cni/serial/SecondStart 11.02
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.9
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
366 TestStartStop/group/no-preload/serial/SecondStart 51.13
367 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
369 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/embed-certs/serial/FirstStart 44.15
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
379 TestStartStop/group/embed-certs/serial/DeployApp 8.21
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
381 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
384 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
387 TestStartStop/group/embed-certs/serial/Stop 16.36
388 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
389 TestStartStop/group/embed-certs/serial/SecondStart 25.29
390 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
392 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
x
+
TestDownloadOnly/v1.28.0/json-events (3.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-212537 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-212537 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.907729354s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (3.91s)
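The -o=json mode emits one CloudEvents-style JSON object per line, which is what the json-events assertions consume. A small sketch for inspecting the stream by hand; it assumes jq is installed, and the step event type and data field names below are what minikube's JSON output is generally expected to use, so verify them against the actual stream:

	out/minikube-linux-amd64 start -o=json --download-only -p download-demo --force \
	    --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data | "\(.currentstep)/\(.totalsteps) \(.name)"'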

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 07:55:27.445388   14488 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1123 07:55:27.445463   14488 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
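preload-exists only checks the local cache; the path it verifies is logged verbatim above, so the same check can be run by hand (on a developer machine the cache root is normally $MINIKUBE_HOME or ~/.minikube):

	ls -lh /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4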

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-212537
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-212537: exit status 85 (67.699651ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-212537 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-212537 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:23.588702   14501 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:23.588785   14501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:23.588792   14501 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:23.588796   14501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:23.588939   14501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	W1123 07:55:23.589037   14501 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21966-10964/.minikube/config/config.json: open /home/jenkins/minikube-integration/21966-10964/.minikube/config/config.json: no such file or directory
	I1123 07:55:23.589459   14501 out.go:368] Setting JSON to true
	I1123 07:55:23.590790   14501 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2271,"bootTime":1763882253,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 07:55:23.590838   14501 start.go:143] virtualization: kvm guest
	I1123 07:55:23.594941   14501 out.go:99] [download-only-212537] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1123 07:55:23.595068   14501 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 07:55:23.595126   14501 notify.go:221] Checking for updates...
	I1123 07:55:23.596070   14501 out.go:171] MINIKUBE_LOCATION=21966
	I1123 07:55:23.597137   14501 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:23.598190   14501 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 07:55:23.599443   14501 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 07:55:23.600573   14501 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 07:55:23.602504   14501 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 07:55:23.602699   14501 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:23.627178   14501 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 07:55:23.627248   14501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:24.009483   14501 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 07:55:23.999871536 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:24.009581   14501 docker.go:319] overlay module found
	I1123 07:55:24.011121   14501 out.go:99] Using the docker driver based on user configuration
	I1123 07:55:24.011151   14501 start.go:309] selected driver: docker
	I1123 07:55:24.011159   14501 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:24.011264   14501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:24.068439   14501 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-23 07:55:24.059908642 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:24.068616   14501 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:24.069115   14501 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 07:55:24.069291   14501 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 07:55:24.070746   14501 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-212537 host does not exist
	  To start a cluster, run: "minikube start -p download-only-212537"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
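The non-zero exit here is expected rather than a defect: a --download-only profile never creates a host, so "minikube logs" has nothing to collect and the test only records the failure (exit status 85 in this run). To see the same behavior by hand, assuming the profile has not yet been deleted:

	out/minikube-linux-amd64 logs -p download-only-212537
	echo "exit code: $?"   # 85 in this run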

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-212537
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (3.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-071935 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-071935 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.35131238s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.35s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 07:55:31.204829   14488 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 07:55:31.204869   14488 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-071935
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-071935: exit status 85 (68.330076ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-212537 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-212537 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ delete  │ -p download-only-212537                                                                                                                                                   │ download-only-212537 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │ 23 Nov 25 07:55 UTC │
	│ start   │ -o=json --download-only -p download-only-071935 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-071935 │ jenkins │ v1.37.0 │ 23 Nov 25 07:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 07:55:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 07:55:27.902881   14865 out.go:360] Setting OutFile to fd 1 ...
	I1123 07:55:27.903130   14865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:27.903139   14865 out.go:374] Setting ErrFile to fd 2...
	I1123 07:55:27.903144   14865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 07:55:27.903344   14865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 07:55:27.903777   14865 out.go:368] Setting JSON to true
	I1123 07:55:27.904627   14865 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2275,"bootTime":1763882253,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 07:55:27.904703   14865 start.go:143] virtualization: kvm guest
	I1123 07:55:27.906516   14865 out.go:99] [download-only-071935] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 07:55:27.906645   14865 notify.go:221] Checking for updates...
	I1123 07:55:27.907736   14865 out.go:171] MINIKUBE_LOCATION=21966
	I1123 07:55:27.908978   14865 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 07:55:27.910141   14865 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 07:55:27.911220   14865 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 07:55:27.912189   14865 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 07:55:27.914065   14865 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 07:55:27.914240   14865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 07:55:27.935799   14865 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 07:55:27.935879   14865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:27.997166   14865 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-23 07:55:27.987269577 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:27.997273   14865 docker.go:319] overlay module found
	I1123 07:55:27.998655   14865 out.go:99] Using the docker driver based on user configuration
	I1123 07:55:27.998678   14865 start.go:309] selected driver: docker
	I1123 07:55:27.998692   14865 start.go:927] validating driver "docker" against <nil>
	I1123 07:55:27.998771   14865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 07:55:28.051357   14865 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-23 07:55:28.042213198 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 07:55:28.051503   14865 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 07:55:28.051996   14865 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 07:55:28.052164   14865 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 07:55:28.053722   14865 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-071935 host does not exist
	  To start a cluster, run: "minikube start -p download-only-071935"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-071935
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.38s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-372793 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-372793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-372793
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.78s)
=== RUN   TestBinaryMirror
I1123 07:55:32.261772   14488 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-218443 --alsologtostderr --binary-mirror http://127.0.0.1:42195 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-218443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-218443
--- PASS: TestBinaryMirror (0.78s)
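Note: the binary.go line above shows minikube validating the mirrored kubectl against its published SHA-256 rather than caching it. A minimal by-hand sketch of the same check (assuming curl and sha256sum are available; file names illustrative):
  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl
  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
  # the .sha256 file holds only the hex digest, so the file name is supplied here
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check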

TestOffline (54.11s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-627232 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-627232 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (51.66449965s)
helpers_test.go:175: Cleaning up "offline-crio-627232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-627232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-627232: (2.448497721s)
--- PASS: TestOffline (54.11s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-959783
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-959783: exit status 85 (63.971824ms)
-- stdout --
	* Profile "addons-959783" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-959783"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-959783
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-959783: exit status 85 (63.995027ms)
-- stdout --
	* Profile "addons-959783" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-959783"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (123.2s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-959783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-959783 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.195469165s)
--- PASS: TestAddons/Setup (123.20s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-959783 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-959783 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (7.39s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-959783 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-959783 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9e7c6935-35b9-43c7-ab53-153ff628441d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9e7c6935-35b9-43c7-ab53-153ff628441d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003302221s
addons_test.go:694: (dbg) Run:  kubectl --context addons-959783 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-959783 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-959783 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.39s)

TestAddons/StoppedEnableDisable (16.61s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-959783
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-959783: (16.34481507s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-959783
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-959783
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-959783
--- PASS: TestAddons/StoppedEnableDisable (16.61s)

TestCertOptions (27.83s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-795018 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1123 08:37:36.841258   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-795018 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.724261503s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-795018 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-795018 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-795018 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-795018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-795018
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-795018: (4.436586664s)
--- PASS: TestCertOptions (27.83s)
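Note: the SAN and port assertions above can be reproduced by hand while such a profile is up; a sketch (the grep filter is illustrative, not part of the test):
  minikube -p cert-options-795018 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 'Subject Alternative Name'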

TestCertExpiration (224.04s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-747782 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-747782 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (34.231382396s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-747782 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-747782 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.254822616s)
helpers_test.go:175: Cleaning up "cert-expiration-747782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-747782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-747782: (4.555317111s)
--- PASS: TestCertExpiration (224.04s)
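Note: the --cert-expiration=3m/8760h flags above force certificate regeneration on the second start; one way to eyeball the resulting expiry before the cleanup step (a sketch, not part of the test):
  minikube -p cert-expiration-747782 ssh \
    "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"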

TestForceSystemdFlag (28.6s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-170661 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-170661 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.549747451s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-170661 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-170661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-170661
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-170661: (4.744109744s)
--- PASS: TestForceSystemdFlag (28.60s)
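Note: the assertion above reads the generated CRI-O drop-in; a hand-run equivalent that isolates the relevant key (assuming CRI-O's cgroup_manager option is what --force-systemd toggles here):
  minikube -p force-systemd-flag-170661 ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
  # expected with --force-systemd: cgroup_manager = "systemd"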

TestForceSystemdEnv (31.88s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-729509 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-729509 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.248200769s)
helpers_test.go:175: Cleaning up "force-systemd-env-729509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-729509
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-729509: (2.63258149s)
--- PASS: TestForceSystemdEnv (31.88s)

TestErrorSpam/setup (21.3s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-285730 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-285730 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-285730 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-285730 --driver=docker  --container-runtime=crio: (21.296688675s)
--- PASS: TestErrorSpam/setup (21.30s)

TestErrorSpam/start (0.62s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.92s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (6.74s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 pause: exit status 80 (2.086542651s)
-- stdout --
	* Pausing node nospam-285730 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:01:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 pause: exit status 80 (2.325306829s)
-- stdout --
	* Pausing node nospam-285730 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:01:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 pause: exit status 80 (2.328601647s)
-- stdout --
	* Pausing node nospam-285730 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.74s)
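Note: all three pause attempts fail identically: runc cannot open /run/runc inside the node. A sketch for confirming this from the host, rerunning the exact command the error quotes (the ls is only to inspect which runtime state directories do exist):
  minikube -p nospam-285730 ssh "sudo runc list -f json"
  minikube -p nospam-285730 ssh "ls /run"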

TestErrorSpam/unpause (5.04s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 unpause: exit status 80 (1.821620913s)
-- stdout --
	* Unpausing node nospam-285730 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:01:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 unpause: exit status 80 (1.384863263s)
-- stdout --
	* Unpausing node nospam-285730 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:01:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 unpause: exit status 80 (1.831342763s)
-- stdout --
	* Unpausing node nospam-285730 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:01:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.04s)

TestErrorSpam/stop (8.03s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 stop: (7.834952312s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-285730 --log_dir /tmp/nospam-285730 stop
--- PASS: TestErrorSpam/stop (8.03s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21966-10964/.minikube/files/etc/test/nested/copy/14488/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.6s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762247 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1123 08:02:36.845830   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:36.852184   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:36.863492   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:36.884815   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:36.926136   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:37.007463   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:37.168883   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:37.490518   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:38.132529   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:39.414103   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:41.976913   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-762247 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.596381198s)
--- PASS: TestFunctional/serial/StartWithProxy (69.60s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (14.53s)
=== RUN   TestFunctional/serial/SoftStart
I1123 08:02:44.727033   14488 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762247 --alsologtostderr -v=8
E1123 08:02:47.098728   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:02:57.340500   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-762247 --alsologtostderr -v=8: (14.532334181s)
functional_test.go:678: soft start took 14.533013016s for "functional-762247" cluster.
I1123 08:02:59.259673   14488 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (14.53s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-762247 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-762247 /tmp/TestFunctionalserialCacheCmdcacheadd_local3401399696/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 cache add minikube-local-cache-test:functional-762247
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 cache delete minikube-local-cache-test:functional-762247
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-762247
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.91979ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
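Note: condensed, the round-trip this test validates is (same commands as above, shown with a stock minikube binary):
  minikube -p functional-762247 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-762247 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails while the image is gone
  minikube -p functional-762247 cache reload
  minikube -p functional-762247 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload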

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 kubectl -- --context functional-762247 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-762247 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (97.07s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762247 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 08:03:17.822425   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:03:58.784825   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-762247 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m37.069943022s)
functional_test.go:776: restart took 1m37.070074451s for "functional-762247" cluster.
I1123 08:04:42.338608   14488 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (97.07s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-762247 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
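Note: the phase/status pairs above come from parsing the control-plane pod list JSON; an equivalent one-liner (jsonpath expression illustrative, not from the test) would be:
  kubectl --context functional-762247 get po -l tier=control-plane -n kube-system \
    -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'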

TestFunctional/serial/LogsCmd (1.14s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-762247 logs: (1.14432389s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

TestFunctional/serial/LogsFileCmd (1.16s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 logs --file /tmp/TestFunctionalserialLogsFileCmd81412807/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-762247 logs --file /tmp/TestFunctionalserialLogsFileCmd81412807/001/logs.txt: (1.158282828s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)

TestFunctional/serial/InvalidService (4.52s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-762247 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-762247
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-762247: exit status 115 (331.032188ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32425 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-762247 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-762247 delete -f testdata/invalidsvc.yaml: (1.032834324s)
--- PASS: TestFunctional/serial/InvalidService (4.52s)
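Note: the negative case above amounts to pointing minikube service at a Service with no running backing pod (the manifest path is relative to the minikube test tree):
  kubectl --context functional-762247 apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional-762247    # exit 115: SVC_UNREACHABLE
  kubectl --context functional-762247 delete -f testdata/invalidsvc.yaml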

TestFunctional/parallel/ConfigCmd (0.41s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 config get cpus: exit status 14 (81.957322ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 config get cpus: exit status 14 (65.63537ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

TestFunctional/parallel/DashboardCmd (6.83s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-762247 --alsologtostderr -v=1]
E1123 08:05:20.706883   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-762247 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 54313: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.83s)
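
The run passes --url so minikube prints the proxied dashboard address instead of opening a browser, and --port to pin the local endpoint; the trailing "unable to kill pid" note just means the daemonized process had already exited when the harness cleaned up. A sketch of the invocation:

    # prints http://127.0.0.1:36195/... once the kubectl proxy is ready
    minikube dashboard --url --port 36195 -p functional-762247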

                                                
                                    
TestFunctional/parallel/DryRun (0.62s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-762247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (393.434062ms)

                                                
                                                
-- stdout --
	* [functional-762247] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:05:17.507830   53893 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:05:17.508100   53893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:05:17.508110   53893 out.go:374] Setting ErrFile to fd 2...
	I1123 08:05:17.508114   53893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:05:17.508321   53893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:05:17.508811   53893 out.go:368] Setting JSON to false
	I1123 08:05:17.509729   53893 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2864,"bootTime":1763882253,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:05:17.509781   53893 start.go:143] virtualization: kvm guest
	I1123 08:05:17.712912   53893 out.go:179] * [functional-762247] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:05:17.752590   53893 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:05:17.752587   53893 notify.go:221] Checking for updates...
	I1123 08:05:17.754675   53893 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:05:17.755724   53893 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:05:17.756778   53893 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:05:17.757675   53893 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:05:17.758632   53893 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:05:17.760072   53893 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:05:17.760588   53893 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:05:17.783129   53893 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:05:17.783206   53893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:05:17.837935   53893 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-23 08:05:17.828783662 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:05:17.838040   53893 docker.go:319] overlay module found
	I1123 08:05:17.839575   53893 out.go:179] * Using the docker driver based on existing profile
	I1123 08:05:17.840597   53893 start.go:309] selected driver: docker
	I1123 08:05:17.840612   53893 start.go:927] validating driver "docker" against &{Name:functional-762247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-762247 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:05:17.840720   53893 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:05:17.842205   53893 out.go:203] 
	W1123 08:05:17.843231   53893 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 08:05:17.844172   53893 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762247 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.62s)
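
Both invocations exercise start-time validation without touching the cluster: the first trips the RSRC_INSUFFICIENT_REQ_MEMORY preflight check (exit status 23) because 250MB is below the 1800MB usable minimum, while the second, using the profile's existing memory setting, validates cleanly. Reduced to commands:

    # fails validation with exit status 23; nothing is created or modified
    minikube start -p functional-762247 --dry-run --memory 250MB --driver=docker --container-runtime=crio
    # same dry run with the profile defaults passes
    minikube start -p functional-762247 --dry-run --driver=docker --container-runtime=crio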

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-762247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-762247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.725947ms)

                                                
                                                
-- stdout --
	* [functional-762247] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:04:58.713505   49171 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:04:58.713801   49171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:04:58.713813   49171 out.go:374] Setting ErrFile to fd 2...
	I1123 08:04:58.713820   49171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:04:58.714258   49171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:04:58.714880   49171 out.go:368] Setting JSON to false
	I1123 08:04:58.716075   49171 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2846,"bootTime":1763882253,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:04:58.716150   49171 start.go:143] virtualization: kvm guest
	I1123 08:04:58.718506   49171 out.go:179] * [functional-762247] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1123 08:04:58.720080   49171 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:04:58.720107   49171 notify.go:221] Checking for updates...
	I1123 08:04:58.726066   49171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:04:58.727477   49171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:04:58.728637   49171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:04:58.729777   49171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:04:58.731167   49171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:04:58.732661   49171 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:04:58.733404   49171 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:04:58.761262   49171 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:04:58.761350   49171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:04:58.825364   49171 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-23 08:04:58.814300861 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:04:58.825451   49171 docker.go:319] overlay module found
	I1123 08:04:58.826927   49171 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 08:04:58.827942   49171 start.go:309] selected driver: docker
	I1123 08:04:58.827954   49171 start.go:927] validating driver "docker" against &{Name:functional-762247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-762247 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:04:58.828038   49171 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:04:58.829505   49171 out.go:203] 
	W1123 08:04:58.830408   49171 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 08:04:58.831335   49171 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
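
The French output is the point of the test: minikube selects its message catalog from the process locale, so the same under-memory dry run is repeated under a French locale and the assertion targets the translated RSRC_INSUFFICIENT_REQ_MEMORY message ("Exiting due to ... requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A sketch, assuming LC_ALL is how the locale is forced (the test's exact environment is not visible in this log):

    LC_ALL=fr_FR.UTF-8 minikube start -p functional-762247 --dry-run --memory 250MB \
        --driver=docker --container-runtime=crio
    # stderr: X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ...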

                                                
                                    
TestFunctional/parallel/StatusCmd (0.93s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)
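
The three runs cover the default table output, a Go-template format string over the status struct, and JSON. Note the test's format string labels the kubelet field "kublet"; the typo lives only in the label text, the template key is still .Kubelet. Equivalent invocations:

    minikube -p functional-762247 status
    minikube -p functional-762247 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-762247 status -o json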

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)
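
Both forms read the same addon registry; the JSON output is a map keyed by addon name (structure assumed from the command's documented output, not shown in this log), which makes it convenient to script against:

    # list addon names only; jq is assumed to be installed on the host
    minikube -p functional-762247 addons list -o json | jq -r 'keys[]'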

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.68s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4d0f89cd-cd53-4bf8-8f37-355ad8173db0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002970741s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-762247 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-762247 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-762247 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-762247 apply -f testdata/storage-provisioner/pod.yaml
I1123 08:04:56.171383   14488 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5ec76929-cbaf-435b-ad18-1ef6a06f4666] Pending
helpers_test.go:352: "sp-pod" [5ec76929-cbaf-435b-ad18-1ef6a06f4666] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5ec76929-cbaf-435b-ad18-1ef6a06f4666] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.002945859s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-762247 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-762247 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-762247 delete -f testdata/storage-provisioner/pod.yaml: (1.026298317s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-762247 apply -f testdata/storage-provisioner/pod.yaml
I1123 08:05:07.419533   14488 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [70c34ff2-47e1-4b6d-adfe-0a726e7e7059] Pending
helpers_test.go:352: "sp-pod" [70c34ff2-47e1-4b6d-adfe-0a726e7e7059] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [70c34ff2-47e1-4b6d-adfe-0a726e7e7059] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004489861s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-762247 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.68s)
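
The test is a persistence round trip: bind a claim, write a file through the mount from one pod, delete that pod, recreate it against the same claim, and confirm the file survived. The command flow, essentially as the harness drives it (run from the test repo root where the testdata manifests live; the kubectl wait lines stand in for the harness's own pod polling):

    kubectl --context functional-762247 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-762247 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-762247 wait --for=condition=Ready pod/sp-pod --timeout=120s
    kubectl --context functional-762247 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-762247 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-762247 apply -f testdata/storage-provisioner/pod.yaml   # same claim rebinds
    kubectl --context functional-762247 wait --for=condition=Ready pod/sp-pod --timeout=120s
    kubectl --context functional-762247 exec sp-pod -- ls /tmp/mount                     # -> foo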

                                                
                                    
TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh -n functional-762247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 cp functional-762247:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1518080208/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh -n functional-762247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh -n functional-762247 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.93s)
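
The three copies cover host-to-node, node-to-host, and host-to-node into a directory that does not exist yet (minikube creates the parents). The node side is addressed as <profile>:<absolute path>:

    minikube -p functional-762247 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
    minikube -p functional-762247 cp functional-762247:/home/docker/cp-test.txt ./cp-test.txt  # node -> host
    minikube -p functional-762247 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt      # parents created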

                                                
                                    
TestFunctional/parallel/MySQL (14.9s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-762247 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-rm9t2" [dc1ecf1f-71c6-4594-ab22-27eba8a170f8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-rm9t2" [dc1ecf1f-71c6-4594-ab22-27eba8a170f8] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 12.051562468s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-762247 exec mysql-5bb876957f-rm9t2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-762247 exec mysql-5bb876957f-rm9t2 -- mysql -ppassword -e "show databases;": exit status 1 (108.873817ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1123 08:05:23.180414   14488 retry.go:31] will retry after 706.160125ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-762247 exec mysql-5bb876957f-rm9t2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-762247 exec mysql-5bb876957f-rm9t2 -- mysql -ppassword -e "show databases;": exit status 1 (80.10284ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1123 08:05:23.967719   14488 retry.go:31] will retry after 1.726994969s: exit status 1
2025/11/23 08:05:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1812: (dbg) Run:  kubectl --context functional-762247 exec mysql-5bb876957f-rm9t2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (14.90s)
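
The two ERROR 2002 results are expected noise: the pod reports Running before mysqld has created its socket, so the harness retries with growing backoff until the query succeeds. The same readiness wait can be scripted directly:

    # poll until mysqld actually accepts connections (password matches testdata/mysql.yaml)
    POD=$(kubectl --context functional-762247 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    until kubectl --context functional-762247 exec "$POD" -- mysql -ppassword -e 'show databases;' >/dev/null 2>&1; do
        sleep 2
    done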

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14488/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo cat /etc/test/nested/copy/14488/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
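
FileSync verifies minikube's host-file sync: files placed under $MINIKUBE_HOME/files/ are copied into the node at the same absolute path when the cluster starts, which is how /etc/test/nested/copy/14488/hosts ended up inside the VM. Roughly:

    MINIKUBE_HOME="${MINIKUBE_HOME:-$HOME/.minikube}"
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/14488"
    echo 'Test file for checking file sync process' > "$MINIKUBE_HOME/files/etc/test/nested/copy/14488/hosts"
    minikube start -p functional-762247          # the sync happens during start
    minikube -p functional-762247 ssh "sudo cat /etc/test/nested/copy/14488/hosts"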

                                                
                                    
TestFunctional/parallel/CertSync (1.61s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14488.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo cat /etc/ssl/certs/14488.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14488.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo cat /usr/share/ca-certificates/14488.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/144882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo cat /etc/ssl/certs/144882.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/144882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo cat /usr/share/ca-certificates/144882.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.61s)
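
CertSync is the certificate counterpart: .pem files dropped into $MINIKUBE_HOME/certs are installed into the node under both /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL subject-hash alias, which is where names like 51391683.0 come from. The hash for a given certificate can be reproduced on the host (my-ca.pem is a placeholder):

    MINIKUBE_HOME="${MINIKUBE_HOME:-$HOME/.minikube}"
    cp my-ca.pem "$MINIKUBE_HOME/certs/"            # installed at the next start
    openssl x509 -hash -noout -in my-ca.pem         # prints the subject hash, e.g. 51391683
    minikube -p functional-762247 ssh \
        "sudo cat /etc/ssl/certs/$(openssl x509 -hash -noout -in my-ca.pem).0"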

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-762247 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
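
The assertion walks the first node's metadata.labels map with a Go template to collect the label keys; an equivalent jsonpath query dumps the same map in one shot:

    kubectl --context functional-762247 get nodes -o jsonpath='{.items[0].metadata.labels}'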

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 ssh "sudo systemctl is-active docker": exit status 1 (317.236198ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 ssh "sudo systemctl is-active containerd": exit status 1 (290.762706ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
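
The non-zero exits here mean the assertion passed: with crio as the active runtime, systemctl is-active prints "inactive" and exits 3 for docker and containerd, which the ssh wrapper surfaces as exit status 1 alongside "Process exited with status 3". The check reduces to:

    minikube -p functional-762247 ssh "sudo systemctl is-active docker"       # inactive, exit 3
    minikube -p functional-762247 ssh "sudo systemctl is-active containerd"   # inactive, exit 3
    minikube -p functional-762247 ssh "sudo systemctl is-active crio"         # active, exit 0 (not asserted by this test)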

                                                
                                    
TestFunctional/parallel/License (0.56s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762247 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762247 image ls --format short --alsologtostderr:
I1123 08:05:25.475831   54805 out.go:360] Setting OutFile to fd 1 ...
I1123 08:05:25.475940   54805 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:25.475950   54805 out.go:374] Setting ErrFile to fd 2...
I1123 08:05:25.475956   54805 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:25.476138   54805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
I1123 08:05:25.476642   54805 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:25.476764   54805 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:25.477239   54805 cli_runner.go:164] Run: docker container inspect functional-762247 --format={{.State.Status}}
I1123 08:05:25.494359   54805 ssh_runner.go:195] Run: systemctl --version
I1123 08:05:25.494405   54805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762247
I1123 08:05:25.510858   54805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/functional-762247/id_rsa Username:docker}
I1123 08:05:25.608523   54805 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762247 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762247 image ls --format table --alsologtostderr:
I1123 08:05:26.157658   55095 out.go:360] Setting OutFile to fd 1 ...
I1123 08:05:26.157797   55095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:26.157806   55095 out.go:374] Setting ErrFile to fd 2...
I1123 08:05:26.157813   55095 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:26.158005   55095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
I1123 08:05:26.158582   55095 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:26.158703   55095 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:26.159166   55095 cli_runner.go:164] Run: docker container inspect functional-762247 --format={{.State.Status}}
I1123 08:05:26.176940   55095 ssh_runner.go:195] Run: systemctl --version
I1123 08:05:26.176990   55095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762247
I1123 08:05:26.193154   55095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/functional-762247/id_rsa Username:docker}
I1123 08:05:26.291659   55095 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762247 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags"
:["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d2
45c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b
1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manag
er@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd27
7787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1
.34.1"],"size":"89046001"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762247 image ls --format json --alsologtostderr:
I1123 08:05:25.933351   54984 out.go:360] Setting OutFile to fd 1 ...
I1123 08:05:25.933444   54984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:25.933450   54984 out.go:374] Setting ErrFile to fd 2...
I1123 08:05:25.933456   54984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:25.933659   54984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
I1123 08:05:25.934260   54984 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:25.934374   54984 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:25.934821   54984 cli_runner.go:164] Run: docker container inspect functional-762247 --format={{.State.Status}}
I1123 08:05:25.951967   54984 ssh_runner.go:195] Run: systemctl --version
I1123 08:05:25.952007   54984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762247
I1123 08:05:25.967570   54984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/functional-762247/id_rsa Username:docker}
I1123 08:05:26.065870   54984 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
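
All of the ImageList variants render the same inventory, fetched each time via `sudo crictl images --output json` on the node (visible at the end of every stderr trace); only the client-side formatter differs. The JSON form is the one worth scripting against, for example to list every tagged image:

    # jq is assumed to be available on the host
    minikube -p functional-762247 image ls --format json | jq -r '.[].repoTags[]'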

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762247 image ls --format yaml --alsologtostderr:
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762247 image ls --format yaml --alsologtostderr:
I1123 08:05:25.693803   54860 out.go:360] Setting OutFile to fd 1 ...
I1123 08:05:25.694037   54860 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:25.694045   54860 out.go:374] Setting ErrFile to fd 2...
I1123 08:05:25.694049   54860 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:25.694237   54860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
I1123 08:05:25.694731   54860 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:25.694850   54860 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:25.695347   54860 cli_runner.go:164] Run: docker container inspect functional-762247 --format={{.State.Status}}
I1123 08:05:25.714526   54860 ssh_runner.go:195] Run: systemctl --version
I1123 08:05:25.714570   54860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762247
I1123 08:05:25.731909   54860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/functional-762247/id_rsa Username:docker}
I1123 08:05:25.833294   54860 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 ssh pgrep buildkitd: exit status 1 (273.497985ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image build -t localhost/my-image:functional-762247 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-762247 image build -t localhost/my-image:functional-762247 testdata/build --alsologtostderr: (1.578200838s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-762247 image build -t localhost/my-image:functional-762247 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5685189f134
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-762247
--> 9449785a00f
Successfully tagged localhost/my-image:functional-762247
9449785a00f3e181e35705da98a778c2686faf9eb80626cb73dbcfc6b5af1127
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-762247 image build -t localhost/my-image:functional-762247 testdata/build --alsologtostderr:
I1123 08:05:26.112776   55071 out.go:360] Setting OutFile to fd 1 ...
I1123 08:05:26.113034   55071 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:26.113043   55071 out.go:374] Setting ErrFile to fd 2...
I1123 08:05:26.113047   55071 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:05:26.113225   55071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
I1123 08:05:26.113734   55071 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:26.114247   55071 config.go:182] Loaded profile config "functional-762247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:05:26.114700   55071 cli_runner.go:164] Run: docker container inspect functional-762247 --format={{.State.Status}}
I1123 08:05:26.134781   55071 ssh_runner.go:195] Run: systemctl --version
I1123 08:05:26.134836   55071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762247
I1123 08:05:26.152656   55071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/functional-762247/id_rsa Username:docker}
I1123 08:05:26.251380   55071 build_images.go:162] Building image from path: /tmp/build.617745034.tar
I1123 08:05:26.251444   55071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 08:05:26.258842   55071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.617745034.tar
I1123 08:05:26.262243   55071 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.617745034.tar: stat -c "%s %y" /var/lib/minikube/build/build.617745034.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.617745034.tar': No such file or directory
I1123 08:05:26.262277   55071 ssh_runner.go:362] scp /tmp/build.617745034.tar --> /var/lib/minikube/build/build.617745034.tar (3072 bytes)
I1123 08:05:26.278314   55071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.617745034
I1123 08:05:26.285214   55071 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.617745034 -xf /var/lib/minikube/build/build.617745034.tar
I1123 08:05:26.292996   55071 crio.go:315] Building image: /var/lib/minikube/build/build.617745034
I1123 08:05:26.293045   55071 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-762247 /var/lib/minikube/build/build.617745034 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1123 08:05:27.609424   55071 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-762247 /var/lib/minikube/build/build.617745034 --cgroup-manager=cgroupfs: (1.316353539s)
I1123 08:05:27.609479   55071 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.617745034
I1123 08:05:27.617668   55071 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.617745034.tar
I1123 08:05:27.624832   55071 build_images.go:218] Built localhost/my-image:functional-762247 from /tmp/build.617745034.tar
I1123 08:05:27.624861   55071 build_images.go:134] succeeded building to: functional-762247
I1123 08:05:27.624865   55071 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image ls
E1123 08:07:36.840381   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:08:04.549012   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:12:36.840072   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)
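
The three STEP lines above imply a minimal build context. The sketch below reconstructs one that would produce the same steps; the actual contents of testdata/build may differ, and content.txt here is a placeholder payload:

# Recreate a build context matching STEP 1/3..3/3 from the log, then build it.
mkdir -p /tmp/build-demo && cd /tmp/build-demo
printf 'hello\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-762247 image build -t localhost/my-image:functional-762247 .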

TestFunctional/parallel/ImageCommands/Setup (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-762247
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.96s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-762247 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-762247 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-762247 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-762247 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 47340: os: process already finished
helpers_test.go:519: unable to terminate pid 47082: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-762247 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-762247 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [9cc50bfc-f9fe-4987-83bb-2b49b4a84513] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [9cc50bfc-f9fe-4987-83bb-2b49b4a84513] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00339486s
I1123 08:04:58.462611   14488 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "378.251623ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.941007ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "344.244968ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.346269ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image rm kicbase/echo-server:functional-762247 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-762247 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.211.37 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
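
For reference, the AccessDirect check can be replayed by hand. A sketch assuming "minikube tunnel" is still running in another shell (service and context names taken from this run):

# Resolve the LoadBalancer ingress IP assigned by the tunnel, then hit it.
IP=$(kubectl --context functional-762247 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${IP}" >/dev/null && echo "tunnel at http://${IP} is working"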

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-762247 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (5.86s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdany-port4020238849/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763885098837527986" to /tmp/TestFunctionalparallelMountCmdany-port4020238849/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763885098837527986" to /tmp/TestFunctionalparallelMountCmdany-port4020238849/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763885098837527986" to /tmp/TestFunctionalparallelMountCmdany-port4020238849/001/test-1763885098837527986
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (292.30521ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1123 08:04:59.130170   14488 retry.go:31] will retry after 691.359745ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 08:04 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 08:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 08:04 test-1763885098837527986
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh cat /mount-9p/test-1763885098837527986
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-762247 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d2170fb3-2525-4a8c-b655-713c5262d511] Pending
helpers_test.go:352: "busybox-mount" [d2170fb3-2525-4a8c-b655-713c5262d511] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d2170fb3-2525-4a8c-b655-713c5262d511] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d2170fb3-2525-4a8c-b655-713c5262d511] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003353008s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-762247 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdany-port4020238849/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.86s)
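
The mount round-trip above can be reproduced manually. A sketch, assuming /tmp/mount-demo is a fresh host directory; the test polls findmnt because the 9p mount comes up asynchronously after the daemon starts:

# Start a 9p mount in the background, then verify it from inside the node.
mkdir -p /tmp/mount-demo
out/minikube-linux-amd64 mount -p functional-762247 /tmp/mount-demo:/mount-9p &
sleep 2   # give the mount daemon a moment, mirroring the test's retry loop
out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-762247 ssh -- ls -la /mount-9p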

TestFunctional/parallel/MountCmd/specific-port (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdspecific-port3322995438/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.605889ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1123 08:05:04.971036   14488 retry.go:31] will retry after 565.124902ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdspecific-port3322995438/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 ssh "sudo umount -f /mount-9p": exit status 1 (292.942509ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-762247 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdspecific-port3322995438/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234069852/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234069852/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234069852/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T" /mount1: exit status 1 (330.153206ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1123 08:05:06.901866   14488 retry.go:31] will retry after 299.584515ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-762247 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234069852/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234069852/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-762247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4234069852/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)
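
The cleanup path exercised here hinges on the --kill flag seen at functional_test_mount_test.go:370: one invocation tears down every mount process for the profile, which is why the three individual stop attempts then find no parent process. Replayed by hand:

# Kill all outstanding "minikube mount" daemons for this profile in one shot.
out/minikube-linux-amd64 mount -p functional-762247 --kill=true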

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
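
All three cases run the same command; update-context rewrites the profile's kubeconfig entry (server address and port) so kubectl keeps working if the cluster IP changed. A hand-run sketch:

# Refresh the kubeconfig entry for the profile, then confirm the active context.
out/minikube-linux-amd64 -p functional-762247 update-context
kubectl config current-context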

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-762247 service list: (1.691737176s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-762247 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-762247 service list -o json: (1.689606242s)
functional_test.go:1504: Took "1.689694734s" to run "out/minikube-linux-amd64 -p functional-762247 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)
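
The JSON variant returns one entry per service, which makes it scriptable. A small sketch (assumes jq; the exact field names are whatever this minikube build emits, so the filter just pretty-prints):

# Pretty-print the machine-readable service listing for the profile.
out/minikube-linux-amd64 -p functional-762247 service list -o json | jq '.'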

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-762247
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-762247
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-762247
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (133.23s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m12.506353615s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (133.23s)

TestMultiControlPlane/serial/DeployApp (4.12s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 kubectl -- rollout status deployment/busybox: (2.162267885s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-2gx2g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-gk596 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-rxbqt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-2gx2g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-gk596 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-rxbqt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-2gx2g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-gk596 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-rxbqt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.12s)

TestMultiControlPlane/serial/PingHostFromPods (0.99s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-2gx2g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-2gx2g -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-gk596 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-gk596 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-rxbqt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 kubectl -- exec busybox-7b57f96db7-rxbqt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.99s)
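
The extraction pipeline above is worth unpacking: in this BusyBox image's nslookup output, the answer for host.minikube.internal apparently lands on line 5 as "Address 1: <ip> ...", so awk 'NR==5' isolates that line and cut -d' ' -f3 yields the IP (192.168.49.1, the docker network gateway), which each pod then pings. Replayed by hand:

# Resolve the host gateway from inside a pod, then ping it once.
kubectl --context ha-390917 exec busybox-7b57f96db7-2gx2g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
kubectl --context ha-390917 exec busybox-7b57f96db7-2gx2g -- sh -c "ping -c 1 192.168.49.1"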

TestMultiControlPlane/serial/AddWorkerNode (53.79s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 node add --alsologtostderr -v 5
E1123 08:17:36.840266   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 node add --alsologtostderr -v 5: (52.934298612s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.79s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-390917 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (16.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp testdata/cp-test.txt ha-390917:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2737736219/001/cp-test_ha-390917.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917:/home/docker/cp-test.txt ha-390917-m02:/home/docker/cp-test_ha-390917_ha-390917-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m02 "sudo cat /home/docker/cp-test_ha-390917_ha-390917-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917:/home/docker/cp-test.txt ha-390917-m03:/home/docker/cp-test_ha-390917_ha-390917-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m03 "sudo cat /home/docker/cp-test_ha-390917_ha-390917-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917:/home/docker/cp-test.txt ha-390917-m04:/home/docker/cp-test_ha-390917_ha-390917-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m04 "sudo cat /home/docker/cp-test_ha-390917_ha-390917-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp testdata/cp-test.txt ha-390917-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2737736219/001/cp-test_ha-390917-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m02:/home/docker/cp-test.txt ha-390917:/home/docker/cp-test_ha-390917-m02_ha-390917.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917 "sudo cat /home/docker/cp-test_ha-390917-m02_ha-390917.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m02:/home/docker/cp-test.txt ha-390917-m03:/home/docker/cp-test_ha-390917-m02_ha-390917-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m03 "sudo cat /home/docker/cp-test_ha-390917-m02_ha-390917-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m02:/home/docker/cp-test.txt ha-390917-m04:/home/docker/cp-test_ha-390917-m02_ha-390917-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m04 "sudo cat /home/docker/cp-test_ha-390917-m02_ha-390917-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp testdata/cp-test.txt ha-390917-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2737736219/001/cp-test_ha-390917-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m03:/home/docker/cp-test.txt ha-390917:/home/docker/cp-test_ha-390917-m03_ha-390917.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917 "sudo cat /home/docker/cp-test_ha-390917-m03_ha-390917.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m03:/home/docker/cp-test.txt ha-390917-m02:/home/docker/cp-test_ha-390917-m03_ha-390917-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m02 "sudo cat /home/docker/cp-test_ha-390917-m03_ha-390917-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m03:/home/docker/cp-test.txt ha-390917-m04:/home/docker/cp-test_ha-390917-m03_ha-390917-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m04 "sudo cat /home/docker/cp-test_ha-390917-m03_ha-390917-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp testdata/cp-test.txt ha-390917-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2737736219/001/cp-test_ha-390917-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m04:/home/docker/cp-test.txt ha-390917:/home/docker/cp-test_ha-390917-m04_ha-390917.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917 "sudo cat /home/docker/cp-test_ha-390917-m04_ha-390917.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m04:/home/docker/cp-test.txt ha-390917-m02:/home/docker/cp-test_ha-390917-m04_ha-390917-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m02 "sudo cat /home/docker/cp-test_ha-390917-m04_ha-390917-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 cp ha-390917-m04:/home/docker/cp-test.txt ha-390917-m03:/home/docker/cp-test_ha-390917-m04_ha-390917-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m03 "sudo cat /home/docker/cp-test_ha-390917-m04_ha-390917-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.76s)
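
One leg of the copy matrix above, runnable by hand; cp can target any node of the HA cluster by name, and ssh -n reads the file back from that node:

# Push a file to a secondary node, then read it back over SSH.
out/minikube-linux-amd64 -p ha-390917 cp testdata/cp-test.txt ha-390917-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-390917 ssh -n ha-390917-m02 "sudo cat /home/docker/cp-test.txt"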

TestMultiControlPlane/serial/StopSecondaryNode (13.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 node stop m02 --alsologtostderr -v 5: (13.076889513s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5: exit status 7 (675.426947ms)

-- stdout --
	ha-390917
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-390917-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-390917-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-390917-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr **
	I1123 08:18:48.925939   80062 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:18:48.926203   80062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:18:48.926213   80062 out.go:374] Setting ErrFile to fd 2...
	I1123 08:18:48.926217   80062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:18:48.926427   80062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:18:48.926572   80062 out.go:368] Setting JSON to false
	I1123 08:18:48.926600   80062 mustload.go:66] Loading cluster: ha-390917
	I1123 08:18:48.926677   80062 notify.go:221] Checking for updates...
	I1123 08:18:48.927094   80062 config.go:182] Loaded profile config "ha-390917": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:18:48.927114   80062 status.go:174] checking status of ha-390917 ...
	I1123 08:18:48.927624   80062 cli_runner.go:164] Run: docker container inspect ha-390917 --format={{.State.Status}}
	I1123 08:18:48.945850   80062 status.go:371] ha-390917 host status = "Running" (err=<nil>)
	I1123 08:18:48.945870   80062 host.go:66] Checking if "ha-390917" exists ...
	I1123 08:18:48.946116   80062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-390917
	I1123 08:18:48.964150   80062 host.go:66] Checking if "ha-390917" exists ...
	I1123 08:18:48.964368   80062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:18:48.964411   80062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-390917
	I1123 08:18:48.981308   80062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/ha-390917/id_rsa Username:docker}
	I1123 08:18:49.077537   80062 ssh_runner.go:195] Run: systemctl --version
	I1123 08:18:49.083421   80062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:18:49.094857   80062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:18:49.154541   80062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-23 08:18:49.144983919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:18:49.155152   80062 kubeconfig.go:125] found "ha-390917" server: "https://192.168.49.254:8443"
	I1123 08:18:49.155179   80062 api_server.go:166] Checking apiserver status ...
	I1123 08:18:49.155218   80062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:18:49.167153   80062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	W1123 08:18:49.175097   80062 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:18:49.175146   80062 ssh_runner.go:195] Run: ls
	I1123 08:18:49.178563   80062 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:18:49.182536   80062 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:18:49.182556   80062 status.go:463] ha-390917 apiserver status = Running (err=<nil>)
	I1123 08:18:49.182567   80062 status.go:176] ha-390917 status: &{Name:ha-390917 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:18:49.182590   80062 status.go:174] checking status of ha-390917-m02 ...
	I1123 08:18:49.182851   80062 cli_runner.go:164] Run: docker container inspect ha-390917-m02 --format={{.State.Status}}
	I1123 08:18:49.199922   80062 status.go:371] ha-390917-m02 host status = "Stopped" (err=<nil>)
	I1123 08:18:49.199941   80062 status.go:384] host is not running, skipping remaining checks
	I1123 08:18:49.199949   80062 status.go:176] ha-390917-m02 status: &{Name:ha-390917-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:18:49.199968   80062 status.go:174] checking status of ha-390917-m03 ...
	I1123 08:18:49.200223   80062 cli_runner.go:164] Run: docker container inspect ha-390917-m03 --format={{.State.Status}}
	I1123 08:18:49.217773   80062 status.go:371] ha-390917-m03 host status = "Running" (err=<nil>)
	I1123 08:18:49.217791   80062 host.go:66] Checking if "ha-390917-m03" exists ...
	I1123 08:18:49.218005   80062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-390917-m03
	I1123 08:18:49.233991   80062 host.go:66] Checking if "ha-390917-m03" exists ...
	I1123 08:18:49.234213   80062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:18:49.234249   80062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-390917-m03
	I1123 08:18:49.251141   80062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/ha-390917-m03/id_rsa Username:docker}
	I1123 08:18:49.347351   80062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:18:49.359256   80062 kubeconfig.go:125] found "ha-390917" server: "https://192.168.49.254:8443"
	I1123 08:18:49.359282   80062 api_server.go:166] Checking apiserver status ...
	I1123 08:18:49.359325   80062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:18:49.370025   80062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W1123 08:18:49.377995   80062 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:18:49.378043   80062 ssh_runner.go:195] Run: ls
	I1123 08:18:49.381477   80062 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:18:49.385251   80062 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:18:49.385267   80062 status.go:463] ha-390917-m03 apiserver status = Running (err=<nil>)
	I1123 08:18:49.385274   80062 status.go:176] ha-390917-m03 status: &{Name:ha-390917-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:18:49.385292   80062 status.go:174] checking status of ha-390917-m04 ...
	I1123 08:18:49.385547   80062 cli_runner.go:164] Run: docker container inspect ha-390917-m04 --format={{.State.Status}}
	I1123 08:18:49.402554   80062 status.go:371] ha-390917-m04 host status = "Running" (err=<nil>)
	I1123 08:18:49.402573   80062 host.go:66] Checking if "ha-390917-m04" exists ...
	I1123 08:18:49.402849   80062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-390917-m04
	I1123 08:18:49.419949   80062 host.go:66] Checking if "ha-390917-m04" exists ...
	I1123 08:18:49.420177   80062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:18:49.420212   80062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-390917-m04
	I1123 08:18:49.437165   80062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/ha-390917-m04/id_rsa Username:docker}
	I1123 08:18:49.534118   80062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:18:49.545559   80062 status.go:176] ha-390917-m04 status: &{Name:ha-390917-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.75s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.6s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 node start m02 --alsologtostderr -v 5: (7.692135492s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.60s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.28s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 stop --alsologtostderr -v 5
E1123 08:18:59.910954   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 stop --alsologtostderr -v 5: (46.302028865s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 start --wait true --alsologtostderr -v 5
E1123 08:19:49.845295   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:49.851636   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:49.862922   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:49.884246   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:49.925563   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:50.006993   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:50.169127   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:50.490554   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:51.132680   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:52.414251   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:54.975742   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:20:00.097708   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:20:10.339184   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:20:30.821336   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 start --wait true --alsologtostderr -v 5: (1m20.855153193s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.28s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.45s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 node delete m03 --alsologtostderr -v 5
E1123 08:21:11.783043   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 node delete m03 --alsologtostderr -v 5: (9.657475975s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.45s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (43.75s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 stop --alsologtostderr -v 5: (43.643856671s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5: exit status 7 (110.491346ms)
-- stdout --
	ha-390917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-390917-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-390917-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1123 08:22:01.852980   94482 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:01.853185   94482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:01.853192   94482 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:01.853196   94482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:01.853389   94482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:22:01.853537   94482 out.go:368] Setting JSON to false
	I1123 08:22:01.853561   94482 mustload.go:66] Loading cluster: ha-390917
	I1123 08:22:01.853658   94482 notify.go:221] Checking for updates...
	I1123 08:22:01.853935   94482 config.go:182] Loaded profile config "ha-390917": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:01.853954   94482 status.go:174] checking status of ha-390917 ...
	I1123 08:22:01.854378   94482 cli_runner.go:164] Run: docker container inspect ha-390917 --format={{.State.Status}}
	I1123 08:22:01.874141   94482 status.go:371] ha-390917 host status = "Stopped" (err=<nil>)
	I1123 08:22:01.874159   94482 status.go:384] host is not running, skipping remaining checks
	I1123 08:22:01.874165   94482 status.go:176] ha-390917 status: &{Name:ha-390917 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:22:01.874186   94482 status.go:174] checking status of ha-390917-m02 ...
	I1123 08:22:01.874427   94482 cli_runner.go:164] Run: docker container inspect ha-390917-m02 --format={{.State.Status}}
	I1123 08:22:01.890967   94482 status.go:371] ha-390917-m02 host status = "Stopped" (err=<nil>)
	I1123 08:22:01.890984   94482 status.go:384] host is not running, skipping remaining checks
	I1123 08:22:01.890991   94482 status.go:176] ha-390917-m02 status: &{Name:ha-390917-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:22:01.891010   94482 status.go:174] checking status of ha-390917-m04 ...
	I1123 08:22:01.891226   94482 cli_runner.go:164] Run: docker container inspect ha-390917-m04 --format={{.State.Status}}
	I1123 08:22:01.906975   94482 status.go:371] ha-390917-m04 host status = "Stopped" (err=<nil>)
	I1123 08:22:01.907010   94482 status.go:384] host is not running, skipping remaining checks
	I1123 08:22:01.907029   94482 status.go:176] ha-390917-m04 status: &{Name:ha-390917-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.75s)

TestMultiControlPlane/serial/RestartCluster (55.57s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1123 08:22:33.706243   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:22:36.839934   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (54.804284554s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.57s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (69.75s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-390917 node add --control-plane --alsologtostderr -v 5: (1m8.896028893s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-390917 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

TestJSONOutput/start/Command (37.97s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-896921 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1123 08:24:49.846236   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-896921 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.965013703s)
--- PASS: TestJSONOutput/start/Command (37.97s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.04s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-896921 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-896921 --output=json --user=testUser: (6.038079922s)
--- PASS: TestJSONOutput/stop/Command (6.04s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-996940 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-996940 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.466533ms)
-- stdout --
	{"specversion":"1.0","id":"47652fa4-ed8b-4f00-8500-38cf32d7b80b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-996940] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5891db6-3c16-4914-9aba-3b77d4e219a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21966"}}
	{"specversion":"1.0","id":"b0979516-c5d4-4fb5-a777-931bf1600558","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"93dbeb6f-28e0-405a-af58-95a4d219bebd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig"}}
	{"specversion":"1.0","id":"2ef6b275-efdf-4bb7-9bdd-72abd8633a42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube"}}
	{"specversion":"1.0","id":"e96ee0a2-ea49-4f12-8873-4492be7c6ba0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ef82145b-0849-44aa-b65a-c2f8b4821572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"679b6116-3a6f-4b07-83a1-1e5af2b41182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-996940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-996940
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (29.38s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-004805 --network=
E1123 08:25:17.548081   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-004805 --network=: (27.269437669s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-004805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-004805
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-004805: (2.091738901s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.38s)

TestKicCustomNetwork/use_default_bridge_network (22.35s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-289566 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-289566 --network=bridge: (20.374466494s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-289566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-289566
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-289566: (1.955200562s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.35s)

TestKicExistingNetwork (27.61s)

=== RUN   TestKicExistingNetwork
I1123 08:26:01.610836   14488 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 08:26:01.626437   14488 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 08:26:01.626491   14488 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 08:26:01.626510   14488 cli_runner.go:164] Run: docker network inspect existing-network
W1123 08:26:01.641782   14488 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 08:26:01.641807   14488 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1123 08:26:01.641822   14488 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1123 08:26:01.641951   14488 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 08:26:01.657524   14488 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0e05b954e81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1e:02:f0:06:d5:34} reservation:<nil>}
I1123 08:26:01.657941   14488 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a4f410}
I1123 08:26:01.657975   14488 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 08:26:01.658020   14488 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 08:26:01.701622   14488 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-644710 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-644710 --network=existing-network: (25.526324281s)
helpers_test.go:175: Cleaning up "existing-network-644710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-644710
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-644710: (1.966806661s)
I1123 08:26:29.210487   14488 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.61s)

TestKicCustomSubnet (26.61s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-639720 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-639720 --subnet=192.168.60.0/24: (24.506752171s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-639720 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-639720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-639720
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-639720: (2.082690653s)
--- PASS: TestKicCustomSubnet (26.61s)

TestKicStaticIP (23.48s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-004113 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-004113 --static-ip=192.168.200.200: (21.237165753s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-004113 ip
helpers_test.go:175: Cleaning up "static-ip-004113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-004113
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-004113: (2.107548204s)
--- PASS: TestKicStaticIP (23.48s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (49.61s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-857631 --driver=docker  --container-runtime=crio
E1123 08:27:36.843340   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-857631 --driver=docker  --container-runtime=crio: (21.374456499s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-859875 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-859875 --driver=docker  --container-runtime=crio: (22.467517284s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-857631
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-859875
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-859875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-859875
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-859875: (2.285467888s)
helpers_test.go:175: Cleaning up "first-857631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-857631
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-857631: (2.285647626s)
--- PASS: TestMinikubeProfile (49.61s)

TestMountStart/serial/StartWithMountFirst (4.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-768999 --memory=3072 --mount-string /tmp/TestMountStartserial4191247865/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-768999 --memory=3072 --mount-string /tmp/TestMountStartserial4191247865/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.665931568s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.67s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-768999 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (4.68s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-780377 --memory=3072 --mount-string /tmp/TestMountStartserial4191247865/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-780377 --memory=3072 --mount-string /tmp/TestMountStartserial4191247865/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.676891691s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.68s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-780377 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-768999 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-768999 --alsologtostderr -v=5: (1.62731704s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-780377 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-780377
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-780377: (1.229122287s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.31s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-780377
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-780377: (6.31403004s)
--- PASS: TestMountStart/serial/RestartStopped (7.31s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-780377 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (88.58s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463584 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1123 08:29:49.845381   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-463584 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m28.108763533s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (88.58s)

TestMultiNode/serial/DeployApp2Nodes (3.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-463584 -- rollout status deployment/busybox: (1.881011435s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-cgrrx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-frncj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-cgrrx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-frncj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-cgrrx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-frncj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.24s)

TestMultiNode/serial/PingHostFrom2Pods (0.68s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-cgrrx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-cgrrx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-frncj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-463584 -- exec busybox-7b57f96db7-frncj -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)

TestMultiNode/serial/AddNode (23.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-463584 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-463584 -v=5 --alsologtostderr: (22.733346276s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.37s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-463584 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (9.55s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp testdata/cp-test.txt multinode-463584:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp multinode-463584:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3707740964/001/cp-test_multinode-463584.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp multinode-463584:/home/docker/cp-test.txt multinode-463584-m02:/home/docker/cp-test_multinode-463584_multinode-463584-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m02 "sudo cat /home/docker/cp-test_multinode-463584_multinode-463584-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp multinode-463584:/home/docker/cp-test.txt multinode-463584-m03:/home/docker/cp-test_multinode-463584_multinode-463584-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m03 "sudo cat /home/docker/cp-test_multinode-463584_multinode-463584-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp testdata/cp-test.txt multinode-463584-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp multinode-463584-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3707740964/001/cp-test_multinode-463584-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp multinode-463584-m02:/home/docker/cp-test.txt multinode-463584:/home/docker/cp-test_multinode-463584-m02_multinode-463584.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584 "sudo cat /home/docker/cp-test_multinode-463584-m02_multinode-463584.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp multinode-463584-m02:/home/docker/cp-test.txt multinode-463584-m03:/home/docker/cp-test_multinode-463584-m02_multinode-463584-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m03 "sudo cat /home/docker/cp-test_multinode-463584-m02_multinode-463584-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp testdata/cp-test.txt multinode-463584-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp multinode-463584-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3707740964/001/cp-test_multinode-463584-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp multinode-463584-m03:/home/docker/cp-test.txt multinode-463584:/home/docker/cp-test_multinode-463584-m03_multinode-463584.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584 "sudo cat /home/docker/cp-test_multinode-463584-m03_multinode-463584.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 cp multinode-463584-m03:/home/docker/cp-test.txt multinode-463584-m02:/home/docker/cp-test_multinode-463584-m03_multinode-463584-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m02 "sudo cat /home/docker/cp-test_multinode-463584-m03_multinode-463584-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.55s)
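The copy matrix above repeats a single two-step pattern for every node pair: minikube cp writes a file into a node, and minikube ssh -n reads it back to verify. One leg of the round-trip, with the paths used in this run:

	out/minikube-linux-amd64 -p multinode-463584 cp testdata/cp-test.txt multinode-463584-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-463584 ssh -n multinode-463584-m02 "sudo cat /home/docker/cp-test.txt"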

                                                
                                    
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-463584 node stop m03: (1.250028012s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-463584 status: exit status 7 (480.464928ms)

                                                
                                                
-- stdout --
	multinode-463584
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-463584-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-463584-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-463584 status --alsologtostderr: exit status 7 (483.764585ms)

                                                
                                                
-- stdout --
	multinode-463584
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-463584-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-463584-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:30:39.264240  154356 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:30:39.264463  154356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:30:39.264470  154356 out.go:374] Setting ErrFile to fd 2...
	I1123 08:30:39.264475  154356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:30:39.264645  154356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:30:39.264820  154356 out.go:368] Setting JSON to false
	I1123 08:30:39.264844  154356 mustload.go:66] Loading cluster: multinode-463584
	I1123 08:30:39.264918  154356 notify.go:221] Checking for updates...
	I1123 08:30:39.265213  154356 config.go:182] Loaded profile config "multinode-463584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:30:39.265236  154356 status.go:174] checking status of multinode-463584 ...
	I1123 08:30:39.265818  154356 cli_runner.go:164] Run: docker container inspect multinode-463584 --format={{.State.Status}}
	I1123 08:30:39.286680  154356 status.go:371] multinode-463584 host status = "Running" (err=<nil>)
	I1123 08:30:39.286734  154356 host.go:66] Checking if "multinode-463584" exists ...
	I1123 08:30:39.286969  154356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-463584
	I1123 08:30:39.304216  154356 host.go:66] Checking if "multinode-463584" exists ...
	I1123 08:30:39.304422  154356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:30:39.304465  154356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-463584
	I1123 08:30:39.320722  154356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/multinode-463584/id_rsa Username:docker}
	I1123 08:30:39.417234  154356 ssh_runner.go:195] Run: systemctl --version
	I1123 08:30:39.423088  154356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:30:39.434434  154356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:30:39.489960  154356 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-23 08:30:39.480572618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:30:39.490449  154356 kubeconfig.go:125] found "multinode-463584" server: "https://192.168.67.2:8443"
	I1123 08:30:39.490475  154356 api_server.go:166] Checking apiserver status ...
	I1123 08:30:39.490510  154356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:30:39.502201  154356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup
	W1123 08:30:39.509882  154356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:30:39.509919  154356 ssh_runner.go:195] Run: ls
	I1123 08:30:39.513267  154356 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 08:30:39.517290  154356 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 08:30:39.517313  154356 status.go:463] multinode-463584 apiserver status = Running (err=<nil>)
	I1123 08:30:39.517324  154356 status.go:176] multinode-463584 status: &{Name:multinode-463584 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:30:39.517349  154356 status.go:174] checking status of multinode-463584-m02 ...
	I1123 08:30:39.517633  154356 cli_runner.go:164] Run: docker container inspect multinode-463584-m02 --format={{.State.Status}}
	I1123 08:30:39.534283  154356 status.go:371] multinode-463584-m02 host status = "Running" (err=<nil>)
	I1123 08:30:39.534299  154356 host.go:66] Checking if "multinode-463584-m02" exists ...
	I1123 08:30:39.534507  154356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-463584-m02
	I1123 08:30:39.549109  154356 host.go:66] Checking if "multinode-463584-m02" exists ...
	I1123 08:30:39.549459  154356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:30:39.549506  154356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-463584-m02
	I1123 08:30:39.565930  154356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21966-10964/.minikube/machines/multinode-463584-m02/id_rsa Username:docker}
	I1123 08:30:39.662347  154356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:30:39.673802  154356 status.go:176] multinode-463584-m02 status: &{Name:multinode-463584-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:30:39.673831  154356 status.go:174] checking status of multinode-463584-m03 ...
	I1123 08:30:39.674133  154356 cli_runner.go:164] Run: docker container inspect multinode-463584-m03 --format={{.State.Status}}
	I1123 08:30:39.691785  154356 status.go:371] multinode-463584-m03 host status = "Stopped" (err=<nil>)
	I1123 08:30:39.691804  154356 status.go:384] host is not running, skipping remaining checks
	I1123 08:30:39.691811  154356 status.go:176] multinode-463584-m03 status: &{Name:multinode-463584-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
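Note that status deliberately exits with code 7 rather than 0 while any node is stopped, so automation polling a partially stopped cluster must treat 7 as "reachable but not fully running" instead of a hard failure; a minimal POSIX-shell sketch:

	out/minikube-linux-amd64 -p multinode-463584 node stop m03
	out/minikube-linux-amd64 -p multinode-463584 status
	case $? in
	  0) echo "all nodes running" ;;
	  7) echo "at least one node stopped (expected after the node stop)" ;;
	  *) echo "status command itself failed" ;;
	esac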

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-463584 node start m03 -v=5 --alsologtostderr: (6.439343563s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.12s)
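The stopped worker is brought back with node start, after which status returns to exit 0 and kubectl again lists all three nodes; condensed from the run above:

	out/minikube-linux-amd64 -p multinode-463584 node start m03 -v=5 --alsologtostderr
	out/minikube-linux-amd64 -p multinode-463584 status -v=5 --alsologtostderr   # exit 0 once m03 is back
	kubectl get nodes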

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (57.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-463584
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-463584
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-463584: (29.397088871s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463584 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-463584 --wait=true -v=5 --alsologtostderr: (27.7535644s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-463584
--- PASS: TestMultiNode/serial/RestartKeepsNodes (57.27s)
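The node list is captured before a profile-level stop and compared after the restart; a full stop/start must preserve the whole node set. Condensed:

	out/minikube-linux-amd64 node list -p multinode-463584          # record the node set
	out/minikube-linux-amd64 stop -p multinode-463584
	out/minikube-linux-amd64 start -p multinode-463584 --wait=true
	out/minikube-linux-amd64 node list -p multinode-463584          # same set expected after restart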

                                                
                                    
TestMultiNode/serial/DeleteNode (4.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-463584 node delete m03: (4.36989606s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.98s)
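The go-template above prints one Ready status per node, which asserts node count and health in a single call; the same check with cleaned-up quoting (the template itself is unchanged from the run):

	out/minikube-linux-amd64 -p multinode-463584 node delete m03
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expect exactly two "True" lines once m03 is gone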

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-463584 stop: (28.183420066s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-463584 status: exit status 7 (93.3102ms)

                                                
                                                
-- stdout --
	multinode-463584
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-463584-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-463584 status --alsologtostderr: exit status 7 (95.743859ms)

                                                
                                                
-- stdout --
	multinode-463584
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-463584-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:32:17.391744  163862 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:32:17.391993  163862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:17.392002  163862 out.go:374] Setting ErrFile to fd 2...
	I1123 08:32:17.392007  163862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:32:17.392254  163862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:32:17.392470  163862 out.go:368] Setting JSON to false
	I1123 08:32:17.392500  163862 mustload.go:66] Loading cluster: multinode-463584
	I1123 08:32:17.392615  163862 notify.go:221] Checking for updates...
	I1123 08:32:17.393032  163862 config.go:182] Loaded profile config "multinode-463584": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:32:17.393058  163862 status.go:174] checking status of multinode-463584 ...
	I1123 08:32:17.393638  163862 cli_runner.go:164] Run: docker container inspect multinode-463584 --format={{.State.Status}}
	I1123 08:32:17.414194  163862 status.go:371] multinode-463584 host status = "Stopped" (err=<nil>)
	I1123 08:32:17.414216  163862 status.go:384] host is not running, skipping remaining checks
	I1123 08:32:17.414224  163862 status.go:176] multinode-463584 status: &{Name:multinode-463584 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:32:17.414261  163862 status.go:174] checking status of multinode-463584-m02 ...
	I1123 08:32:17.414612  163862 cli_runner.go:164] Run: docker container inspect multinode-463584-m02 --format={{.State.Status}}
	I1123 08:32:17.432795  163862 status.go:371] multinode-463584-m02 host status = "Stopped" (err=<nil>)
	I1123 08:32:17.432819  163862 status.go:384] host is not running, skipping remaining checks
	I1123 08:32:17.432832  163862 status.go:176] multinode-463584-m02 status: &{Name:multinode-463584-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.37s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (31.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463584 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1123 08:32:36.840621   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-463584 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (31.260859933s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-463584 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (31.83s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-463584
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463584-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-463584-m02 --driver=docker  --container-runtime=crio: exit status 14 (71.024336ms)

                                                
                                                
-- stdout --
	* [multinode-463584-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-463584-m02' is duplicated with machine name 'multinode-463584-m02' in profile 'multinode-463584'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-463584-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-463584-m03 --driver=docker  --container-runtime=crio: (22.828362765s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-463584
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-463584: exit status 80 (283.750815ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-463584 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-463584-m03 already exists in multinode-463584-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-463584-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-463584-m03: (2.344058367s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.58s)
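Both failures above come from one rule: profile names and per-node machine names (profile, profile-m02, profile-m03, ...) share a namespace, so a standalone profile named like an existing machine is rejected (exit 14), and node add refuses a node whose generated name collides with an existing profile (exit 80). The first collision, using the names from this run:

	out/minikube-linux-amd64 start -p multinode-463584-m02 --driver=docker --container-runtime=crio
	# exit 14 (MK_USAGE): duplicates the m02 machine of profile multinode-463584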

                                                
                                    
TestPreload (80.25s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-530874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-530874 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (44.513522647s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-530874 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-530874
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-530874: (5.917027591s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-530874 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-530874 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (26.374582991s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-530874 image list
helpers_test.go:175: Cleaning up "test-preload-530874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-530874
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-530874: (2.348311712s)
--- PASS: TestPreload (80.25s)
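The sequence above checks that an image pulled into a cluster created with --preload=false survives a stop/start cycle once preloads are re-enabled; condensed:

	out/minikube-linux-amd64 start -p test-preload-530874 --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-530874 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-530874
	out/minikube-linux-amd64 start -p test-preload-530874 --memory=3072 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-530874 image list   # busybox should still be listed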

                                                
                                    
TestScheduledStopUnix (99.32s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-534501 --memory=3072 --driver=docker  --container-runtime=crio
E1123 08:34:49.846841   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-534501 --memory=3072 --driver=docker  --container-runtime=crio: (22.959759417s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-534501 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:35:02.193804  180662 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:35:02.193926  180662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:35:02.193937  180662 out.go:374] Setting ErrFile to fd 2...
	I1123 08:35:02.193944  180662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:35:02.194158  180662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:35:02.194419  180662 out.go:368] Setting JSON to false
	I1123 08:35:02.194506  180662 mustload.go:66] Loading cluster: scheduled-stop-534501
	I1123 08:35:02.194828  180662 config.go:182] Loaded profile config "scheduled-stop-534501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:35:02.194894  180662 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/config.json ...
	I1123 08:35:02.195063  180662 mustload.go:66] Loading cluster: scheduled-stop-534501
	I1123 08:35:02.195157  180662 config.go:182] Loaded profile config "scheduled-stop-534501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-534501 -n scheduled-stop-534501
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-534501 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:35:02.564507  180810 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:35:02.564752  180810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:35:02.564761  180810 out.go:374] Setting ErrFile to fd 2...
	I1123 08:35:02.564765  180810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:35:02.564951  180810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:35:02.565148  180810 out.go:368] Setting JSON to false
	I1123 08:35:02.565322  180810 daemonize_unix.go:73] killing process 180698 as it is an old scheduled stop
	I1123 08:35:02.565427  180810 mustload.go:66] Loading cluster: scheduled-stop-534501
	I1123 08:35:02.565907  180810 config.go:182] Loaded profile config "scheduled-stop-534501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:35:02.565992  180810 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/config.json ...
	I1123 08:35:02.566197  180810 mustload.go:66] Loading cluster: scheduled-stop-534501
	I1123 08:35:02.566330  180810 config.go:182] Loaded profile config "scheduled-stop-534501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 08:35:02.571106   14488 retry.go:31] will retry after 98.922µs: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.572267   14488 retry.go:31] will retry after 92.136µs: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.573406   14488 retry.go:31] will retry after 161.975µs: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.574547   14488 retry.go:31] will retry after 444.409µs: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.575673   14488 retry.go:31] will retry after 308.772µs: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.576792   14488 retry.go:31] will retry after 632.913µs: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.577909   14488 retry.go:31] will retry after 1.667278ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.580098   14488 retry.go:31] will retry after 1.389596ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.582283   14488 retry.go:31] will retry after 2.452526ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.585473   14488 retry.go:31] will retry after 5.065558ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.590598   14488 retry.go:31] will retry after 5.574772ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.596805   14488 retry.go:31] will retry after 6.064769ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.602942   14488 retry.go:31] will retry after 8.600833ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.612121   14488 retry.go:31] will retry after 22.663797ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.635354   14488 retry.go:31] will retry after 28.068884ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
I1123 08:35:02.663511   14488 retry.go:31] will retry after 57.295352ms: open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-534501 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-534501 -n scheduled-stop-534501
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-534501
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-534501 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:35:28.460755  181370 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:35:28.460994  181370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:35:28.461004  181370 out.go:374] Setting ErrFile to fd 2...
	I1123 08:35:28.461008  181370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:35:28.461212  181370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:35:28.461438  181370 out.go:368] Setting JSON to false
	I1123 08:35:28.461511  181370 mustload.go:66] Loading cluster: scheduled-stop-534501
	I1123 08:35:28.461796  181370 config.go:182] Loaded profile config "scheduled-stop-534501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:35:28.461858  181370 profile.go:143] Saving config to /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/scheduled-stop-534501/config.json ...
	I1123 08:35:28.462033  181370 mustload.go:66] Loading cluster: scheduled-stop-534501
	I1123 08:35:28.462117  181370 config.go:182] Loaded profile config "scheduled-stop-534501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
E1123 08:35:39.912601   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/addons-959783/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1123 08:36:12.911998   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-534501
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-534501: exit status 7 (76.954112ms)

                                                
                                                
-- stdout --
	scheduled-stop-534501
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-534501 -n scheduled-stop-534501
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-534501 -n scheduled-stop-534501: exit status 7 (73.681033ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-534501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-534501
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-534501: (4.879263402s)
--- PASS: TestScheduledStopUnix (99.32s)
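The scheduled-stop lifecycle exercised above: schedule a stop, reschedule (the old scheduler process is killed, hence the "process already finished" notes), cancel, then schedule again and let it fire, after which status exits 7; condensed:

	out/minikube-linux-amd64 stop -p scheduled-stop-534501 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-534501 --schedule 15s    # replaces the 5m schedule
	out/minikube-linux-amd64 stop -p scheduled-stop-534501 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-534501 --schedule 15s
	sleep 30
	out/minikube-linux-amd64 status -p scheduled-stop-534501                 # exit 7 once the stop has fired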

                                                
                                    
TestInsufficientStorage (12.24s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-164744 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-164744 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.782574706s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d63a52b0-d3c3-4872-8273-ff22c22fb839","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-164744] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"20e5c3e6-9e7f-4ae0-80ca-c38f60859474","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21966"}}
	{"specversion":"1.0","id":"b2b1fb77-fa40-4e74-a7e3-4ef884314ab1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8d61fc99-726b-440c-a4e4-16d32211bd41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig"}}
	{"specversion":"1.0","id":"b303efb7-34c2-4154-81e5-ecd71a69cb23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube"}}
	{"specversion":"1.0","id":"3c062c5a-945c-462c-8f6c-8d328b228709","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d9709d0f-0c22-4a3d-8bd3-16f5d42c610e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"21e92d1e-d464-42aa-82e8-04de25e28a9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"74a51579-fcc2-42d3-ac08-744c5cffbb10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2e5e8acc-c1e3-4e0d-8e8b-c6cca9f0a6e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5560e660-3a05-47a2-8709-6de0fb9c314e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"aeb14226-c496-4410-8825-9434f660916b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-164744\" primary control-plane node in \"insufficient-storage-164744\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0585023e-a55f-4ace-bbc8-7856ada0e62f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bf09529-094c-4b35-8875-8bac93aa4287","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"562dfce2-15f6-45c5-926b-77bfe9083d2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-164744 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-164744 --output=json --layout=cluster: exit status 7 (285.29438ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-164744","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-164744","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:36:28.551592  183896 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-164744" does not appear in /home/jenkins/minikube-integration/21966-10964/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-164744 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-164744 --output=json --layout=cluster: exit status 7 (284.742792ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-164744","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-164744","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:36:28.837281  184007 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-164744" does not appear in /home/jenkins/minikube-integration/21966-10964/kubeconfig
	E1123 08:36:28.847208  184007 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/insufficient-storage-164744/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-164744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-164744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-164744: (1.882738117s)
--- PASS: TestInsufficientStorage (12.24s)
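The out-of-space condition is simulated rather than real: the run's environment carries two test-only variables, MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, visible in the JSON events above. Assuming they behave as in this run, a reproduction sketch:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	    out/minikube-linux-amd64 start -p insufficient-storage-164744 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
	# expect exit 26 (RSRC_DOCKER_STORAGE); status then reports 507 InsufficientStorage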

                                                
                                    
TestRunningBinaryUpgrade (47.18s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1884799962 start -p running-upgrade-470083 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1884799962 start -p running-upgrade-470083 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.633678237s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-470083 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-470083 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.642640608s)
helpers_test.go:175: Cleaning up "running-upgrade-470083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-470083
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-470083: (2.416440497s)
--- PASS: TestRunningBinaryUpgrade (47.18s)
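Here an old release binary creates and leaves the cluster running, and the freshly built binary must take it over with a plain start; condensed (the /tmp binary path is specific to this run, and note the old binary's --vm-driver spelling):

	/tmp/minikube-v1.32.0.1884799962 start -p running-upgrade-470083 --memory=3072 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-470083 --memory=3072 --driver=docker --container-runtime=crio   # upgrades the running cluster in place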

                                                
                                    
TestKubernetesUpgrade (306.17s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.72595023s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-930169
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-930169: (2.016955277s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-930169 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-930169 status --format={{.Host}}: exit status 7 (95.916029ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.184964033s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-930169 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (90.693817ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-930169] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-930169
	    minikube start -p kubernetes-upgrade-930169 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9301692 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-930169 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.198894512s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-930169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-930169
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-930169: (2.789152469s)
--- PASS: TestKubernetesUpgrade (306.17s)
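Condensed upgrade path from above: start on the old version, stop, restart in place on the new version, then confirm a downgrade is refused (exit 106) while a restart on the new version still succeeds:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-930169
	out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p kubernetes-upgrade-930169 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # exit 106, K8S_DOWNGRADE_UNSUPPORTED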

                                                
                                    
TestMissingContainerUpgrade (97.99s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1834536148 start -p missing-upgrade-603678 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1834536148 start -p missing-upgrade-603678 --memory=3072 --driver=docker  --container-runtime=crio: (51.622612985s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-603678
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-603678: (1.811553882s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-603678
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-603678 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-603678 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.786332768s)
helpers_test.go:175: Cleaning up "missing-upgrade-603678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-603678
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-603678: (2.359681085s)
--- PASS: TestMissingContainerUpgrade (97.99s)
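Here the machine container created by the old binary is removed behind minikube's back, and the new binary must recover by recreating it on start; condensed (the /tmp binary path is specific to this run):

	/tmp/minikube-v1.32.0.1834536148 start -p missing-upgrade-603678 --memory=3072 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-603678 && docker rm missing-upgrade-603678
	out/minikube-linux-amd64 start -p missing-upgrade-603678 --memory=3072 --driver=docker --container-runtime=crio   # recreates the missing container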

                                                
                                    
TestPause/serial/Start (81.06s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-716098 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-716098 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.060281735s)
--- PASS: TestPause/serial/Start (81.06s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-840508 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-840508 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (77.362638ms)

-- stdout --
	* [NoKubernetes-840508] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
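The two flags are mutually exclusive, which is exactly the usage error this test expects. Either invocation below would pass validation (a sketch reusing this run's profile; the unset command comes straight from the message above):

    # Start without Kubernetes at all (no version flag):
    out/minikube-linux-amd64 start -p NoKubernetes-840508 --no-kubernetes --driver=docker --container-runtime=crio
    # Or clear a globally configured version first:
    minikube config unset kubernetes-version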
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (21.43s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-840508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-840508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.093463843s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-840508 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (21.43s)

TestNoKubernetes/serial/StartWithStopK8s (18.18s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-840508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-840508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.883027768s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-840508 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-840508 status -o json: exit status 2 (324.35509ms)

-- stdout --
	{"Name":"NoKubernetes-840508","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-840508
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-840508: (1.972752454s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.18s)

TestNetworkPlugins/group/false (3.25s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-351793 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-351793 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (155.370737ms)

-- stdout --
	* [false-351793] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I1123 08:37:28.082665  199440 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:37:28.082765  199440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:37:28.082773  199440 out.go:374] Setting ErrFile to fd 2...
	I1123 08:37:28.082777  199440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:37:28.082934  199440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21966-10964/.minikube/bin
	I1123 08:37:28.083380  199440 out.go:368] Setting JSON to false
	I1123 08:37:28.084353  199440 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4795,"bootTime":1763882253,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:37:28.084411  199440 start.go:143] virtualization: kvm guest
	I1123 08:37:28.086134  199440 out.go:179] * [false-351793] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:37:28.087138  199440 notify.go:221] Checking for updates...
	I1123 08:37:28.088094  199440 out.go:179]   - MINIKUBE_LOCATION=21966
	I1123 08:37:28.089396  199440 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:37:28.090539  199440 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21966-10964/kubeconfig
	I1123 08:37:28.091527  199440 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21966-10964/.minikube
	I1123 08:37:28.092498  199440 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:37:28.094408  199440 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:37:28.095983  199440 config.go:182] Loaded profile config "NoKubernetes-840508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1123 08:37:28.096070  199440 config.go:182] Loaded profile config "cert-expiration-747782": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:37:28.096160  199440 config.go:182] Loaded profile config "pause-716098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:37:28.096228  199440 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:37:28.117957  199440 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:37:28.118030  199440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:37:28.175876  199440 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:37:28.166284238 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8 ::1/128] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:37:28.175994  199440 docker.go:319] overlay module found
	I1123 08:37:28.177491  199440 out.go:179] * Using the docker driver based on user configuration
	I1123 08:37:28.178645  199440 start.go:309] selected driver: docker
	I1123 08:37:28.178656  199440 start.go:927] validating driver "docker" against <nil>
	I1123 08:37:28.178666  199440 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:37:28.180142  199440 out.go:203] 
	W1123 08:37:28.181164  199440 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1123 08:37:28.182106  199440 out.go:203] 

** /stderr **
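Because cri-o delegates pod networking entirely to CNI, --cni=false is rejected at validation time. A sketch of an invocation that would pass, using one of the plugins exercised elsewhere in this report (bridge here; any concrete --cni value works):

    out/minikube-linux-amd64 start -p false-351793 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio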
net_test.go:88: 
----------------------- debugLogs start: false-351793 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-351793

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-351793

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-351793

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-351793

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-351793

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-351793

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-351793

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-351793

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-351793

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-351793

>>> host: /etc/nsswitch.conf:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /etc/hosts:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /etc/resolv.conf:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-351793

>>> host: crictl pods:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: crictl containers:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> k8s: describe netcat deployment:
error: context "false-351793" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-351793" does not exist

>>> k8s: netcat logs:
error: context "false-351793" does not exist

>>> k8s: describe coredns deployment:
error: context "false-351793" does not exist

>>> k8s: describe coredns pods:
error: context "false-351793" does not exist

>>> k8s: coredns logs:
error: context "false-351793" does not exist

>>> k8s: describe api server pod(s):
error: context "false-351793" does not exist

>>> k8s: api server logs:
error: context "false-351793" does not exist

>>> host: /etc/cni:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: ip a s:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: ip r s:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: iptables-save:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: iptables table nat:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> k8s: describe kube-proxy daemon set:
error: context "false-351793" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-351793" does not exist

>>> k8s: kube-proxy logs:
error: context "false-351793" does not exist

>>> host: kubelet daemon status:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: kubelet daemon config:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> k8s: kubelet logs:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-840508
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-747782
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-716098
contexts:
- context:
    cluster: NoKubernetes-840508
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-840508
  name: NoKubernetes-840508
- context:
    cluster: cert-expiration-747782
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-747782
  name: cert-expiration-747782
- context:
    cluster: pause-716098
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-716098
  name: pause-716098
current-context: NoKubernetes-840508
kind: Config
users:
- name: NoKubernetes-840508
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/NoKubernetes-840508/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/NoKubernetes-840508/client.key
- name: cert-expiration-747782
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/cert-expiration-747782/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/cert-expiration-747782/client.key
- name: pause-716098
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/pause-716098/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/pause-716098/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-351793

>>> host: docker daemon status:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: docker daemon config:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /etc/docker/daemon.json:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: docker system info:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: cri-docker daemon status:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: cri-docker daemon config:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: cri-dockerd version:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: containerd daemon status:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: containerd daemon config:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /etc/containerd/config.toml:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: containerd config dump:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: crio daemon status:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: crio daemon config:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: /etc/crio:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

>>> host: crio config:
* Profile "false-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-351793"

----------------------- debugLogs end: false-351793 [took: 2.93487767s] --------------------------------
helpers_test.go:175: Cleaning up "false-351793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-351793
--- PASS: TestNetworkPlugins/group/false (3.25s)

TestNoKubernetes/serial/Start (4.16s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-840508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-840508 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.159662034s)
--- PASS: TestNoKubernetes/serial/Start (4.16s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21966-10964/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-840508 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-840508 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.225289ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
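systemctl is-active exits 0 only when the unit is active, so the non-zero status here (3, the systemd code for an inactive unit) is exactly what a --no-kubernetes profile should produce: the test passes because the command fails. The same check by hand (profile from this run):

    # Expect a non-zero exit: kubelet must not be running without Kubernetes.
    out/minikube-linux-amd64 ssh -p NoKubernetes-840508 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet inactive, as expected"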
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (1.78s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
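Both output modes of the same command are exercised back to back; the JSON form is the one meant for scripting (commands verbatim from the test):

    out/minikube-linux-amd64 profile list                  # human-readable table
    out/minikube-linux-amd64 profile list --output=json    # machine-readable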
--- PASS: TestNoKubernetes/serial/ProfileList (1.78s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-840508
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-840508: (1.269860584s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (6.53s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-840508 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-840508 --driver=docker  --container-runtime=crio: (6.528476456s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.53s)

TestPause/serial/SecondStartNoReconfiguration (6.03s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-716098 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-716098 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.021440303s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.03s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-840508 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-840508 "sudo systemctl is-active --quiet service kubelet": exit status 1 (333.028057ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestStoppedBinaryUpgrade/Upgrade (76.17s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1473753643 start -p stopped-upgrade-430008 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1473753643 start -p stopped-upgrade-430008 --memory=3072 --vm-driver=docker  --container-runtime=crio: (59.115625447s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1473753643 -p stopped-upgrade-430008 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1473753643 -p stopped-upgrade-430008 stop: (1.933219446s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-430008 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-430008 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.122396776s)
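The three commands above are the whole upgrade path: provision with the released v1.32.0 binary, stop the cluster, then start it again with the binary under test. Condensed (binaries and profile from this run; note the old release still spells the driver flag --vm-driver):

    /tmp/minikube-v1.32.0.1473753643 start -p stopped-upgrade-430008 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.1473753643 -p stopped-upgrade-430008 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-430008 --memory=3072 --driver=docker --container-runtime=crio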
--- PASS: TestStoppedBinaryUpgrade/Upgrade (76.17s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-430008
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

TestNetworkPlugins/group/auto/Start (38.92s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1123 08:39:49.845707   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (38.921519309s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.92s)

TestNetworkPlugins/group/kindnet/Start (72.55s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m12.546149524s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.55s)

TestNetworkPlugins/group/calico/Start (51.35s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.348033395s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.35s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-351793 "pgrep -a kubelet"
I1123 08:40:26.917581   14488 config.go:182] Loaded profile config "auto-351793": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-351793 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-slnlt" [39cb136d-09f2-4652-9fa2-b1ad0d835a98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-slnlt" [39cb136d-09f2-4652-9fa2-b1ad0d835a98] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003709911s
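The health check above polls pods labeled app=netcat until they report Running and Ready. An equivalent manual wait with plain kubectl (a sketch; context name from this run, timeout matching the test's 15m budget):

    kubectl --context auto-351793 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-351793 wait --for=condition=ready pod -l app=netcat --timeout=15m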
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.20s)

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-351793 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
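This probe is the hairpin test: the netcat pod dials its own service name ("netcat") on port 8080, so traffic leaves the pod, hits the service VIP, and must be NATed straight back to the same pod. The Localhost test above checked the port without leaving the pod; only this path depends on the CNI's hairpin-NAT support:

    # Hairpin probe, as run by the test:
    kubectl --context auto-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"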
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/Start (49.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.304055044s)
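Unlike the kindnet and calico groups, which select a built-in plugin by name, this group passes a manifest path to --cni, so minikube applies testdata/kube-flannel.yaml instead of shipping its own CNI. Both forms side by side (a sketch; profile names taken from this report):

    # Built-in plugin, by name (used by the flannel group below):
    out/minikube-linux-amd64 start -p flannel-351793 --cni=flannel --driver=docker --container-runtime=crio
    # User-supplied CNI manifest (what this test does):
    out/minikube-linux-amd64 start -p custom-flannel-351793 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio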
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.30s)

TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-zzprw" [9b4d5c2c-8ffc-4117-b5aa-59cca4fe7111] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-zzprw" [9b4d5c2c-8ffc-4117-b5aa-59cca4fe7111] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00319877s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-351793 "pgrep -a kubelet"
I1123 08:41:12.422635   14488 config.go:182] Loaded profile config "calico-351793": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-351793 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t29tf" [1d3400ce-aceb-4aae-9b03-f4d7b2bd6001] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t29tf" [1d3400ce-aceb-4aae-9b03-f4d7b2bd6001] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003664843s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.25s)

TestNetworkPlugins/group/calico/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-351793 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

TestNetworkPlugins/group/calico/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.08s)

TestNetworkPlugins/group/calico/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.08s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8mt5x" [587b2d04-3b37-4bce-a24e-3d764f6f695d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00388636s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-351793 "pgrep -a kubelet"
I1123 08:41:28.479015   14488 config.go:182] Loaded profile config "kindnet-351793": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-351793 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h89vf" [5dab9377-9667-43fe-ae6d-adffd7b7ea05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h89vf" [5dab9377-9667-43fe-ae6d-adffd7b7ea05] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003507994s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.19s)

TestNetworkPlugins/group/kindnet/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-351793 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

TestNetworkPlugins/group/enable-default-cni/Start (42.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (42.736352164s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.74s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.53s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-351793 "pgrep -a kubelet"
I1123 08:41:46.617555   14488 config.go:182] Loaded profile config "custom-flannel-351793": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.53s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-351793 replace --force -f testdata/netcat-deployment.yaml
I1123 08:41:46.925447   14488 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jtw9v" [e8fd9f02-9c06-4cac-ae63-e93549690b4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jtw9v" [e8fd9f02-9c06-4cac-ae63-e93549690b4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003881222s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-351793 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

TestNetworkPlugins/group/flannel/Start (53.08s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.079652248s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.08s)

TestNetworkPlugins/group/bridge/Start (69.43s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-351793 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m9.42989167s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.43s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-351793 "pgrep -a kubelet"
I1123 08:42:24.602116   14488 config.go:182] Loaded profile config "enable-default-cni-351793": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-351793 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cbktv" [2e2ac8a2-fd4b-49a5-91ad-dea7e8faa104] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cbktv" [2e2ac8a2-fd4b-49a5-91ad-dea7e8faa104] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.002920566s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.16s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-351793 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.10s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-k2f5q" [bbe2ca0d-61b5-4b91-bf06-2429813ce8fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002663276s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
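ControllerPod gates on the CNI agent itself: a pod labelled app=flannel in the kube-flannel namespace (here kube-flannel-ds-k2f5q) must be Running before the connectivity probes start. Hand equivalent, a sketch against the same profile:

kubectl --context flannel-351793 -n kube-flannel get pods -l app=flannel
kubectl --context flannel-351793 -n kube-flannel rollout status ds/kube-flannel-ds --timeout=10m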

TestStartStop/group/old-k8s-version/serial/FirstStart (53.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.114100734s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.11s)
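old-k8s-version pins --kubernetes-version=v1.28.0 while every other profile in this run uses v1.34.1, exercising minikube's support for an older control plane. Confirming the skew after start (a sketch, assuming jq is installed):

kubectl --context old-k8s-version-057894 version -o json | jq -r '.serverVersion.gitVersion'
# expected: v1.28.0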

TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-351793 "pgrep -a kubelet"
I1123 08:42:56.898162   14488 config.go:182] Loaded profile config "flannel-351793": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/flannel/NetCatPod (10.74s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-351793 replace --force -f testdata/netcat-deployment.yaml
I1123 08:42:57.168057   14488 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1123 08:42:57.454227   14488 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-blsbq" [f74bb52b-f4e1-4ae3-bf2a-c132037ee704] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-blsbq" [f74bb52b-f4e1-4ae3-bf2a-c132037ee704] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003542748s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.74s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-351793 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-351793 "pgrep -a kubelet"
I1123 08:43:27.399069   14488 config.go:182] Loaded profile config "bridge-351793": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-351793 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bxlnn" [431295a5-5455-4902-8cb5-33fde59f821a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bxlnn" [431295a5-5455-4902-8cb5-33fde59f821a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004328805s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

TestStartStop/group/no-preload/serial/FirstStart (55.64s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.641640527s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.64s)
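--preload=false skips minikube's preloaded image tarball, so every control-plane image is pulled individually, which is why this is the slowest FirstStart of the group (55.6s). What actually landed in the node's image store can be inspected with the same subcommand VerifyKubernetesImages uses later:

minikube -p no-preload-187607 image list --format=json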

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.525116735s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.53s)
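--apiserver-port=8444 moves the API server off the default 8443; everything else about this profile is stock. The non-default port should show up in the generated kubeconfig (a sketch):

kubectl config view --minify --context=default-k8s-diff-port-726261 | grep server
# expected: a server: URL ending in :8444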

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-351793 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-351793 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E1123 08:45:27.110251   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:27.116574   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:27.127888   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:27.149186   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:27.190492   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:27.272080   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:27.433601   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:27.755591   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:28.397447   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:29.679675   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:32.241903   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:45:37.363802   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-057894 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3dff7874-bfd3-4630-aa6d-acede64007db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3dff7874-bfd3-4630-aa6d-acede64007db] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003806718s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-057894 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.28s)
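DeployApp is identical across profiles: create the busybox pod from test data, wait for Ready, then exec ulimit -n to confirm the runtime applies a usable open-files limit. Hand-run sketch, assuming testdata/busybox.yaml from the minikube repo:

kubectl --context old-k8s-version-057894 create -f testdata/busybox.yaml
kubectl --context old-k8s-version-057894 wait --for=condition=ready pod busybox --timeout=8m
kubectl --context old-k8s-version-057894 exec busybox -- /bin/sh -c "ulimit -n"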

TestStartStop/group/old-k8s-version/serial/Stop (17.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-057894 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-057894 --alsologtostderr -v=3: (17.838171483s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (17.84s)

TestStartStop/group/newest-cni/serial/FirstStart (25.72s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (25.717386061s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.72s)
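newest-cni starts with a bare CNI (--network-plugin=cni) and pushes a custom pod CIDR into kubeadm via --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16; --wait is trimmed to apiserver, system pods, and the default ServiceAccount because user pods cannot schedule without further CNI setup (hence the 0.00s DeployApp below). Checking the CIDR took effect (a sketch; each node typically gets a per-node slice of the /16):

kubectl --context newest-cni-653361 get nodes -o jsonpath='{.items[0].spec.podCIDR}'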

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-057894 -n old-k8s-version-057894
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-057894 -n old-k8s-version-057894: exit status 7 (87.848803ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-057894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
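minikube status encodes stopped components in its exit code, so exit status 7 against a stopped profile is expected; that is why the harness notes "may be ok" and goes on to enable the dashboard addon offline. Same flow by hand:

minikube status --format='{{.Host}}' -p old-k8s-version-057894 || echo "status exit $? (expected while stopped)"
minikube addons enable dashboard -p old-k8s-version-057894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4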

TestStartStop/group/old-k8s-version/serial/SecondStart (43.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-057894 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (43.379082066s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-057894 -n old-k8s-version-057894
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.81s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-726261 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8d5f0f49-e259-488e-9b83-b51330a2bfdd] Pending
helpers_test.go:352: "busybox" [8d5f0f49-e259-488e-9b83-b51330a2bfdd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8d5f0f49-e259-488e-9b83-b51330a2bfdd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004018953s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-726261 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

TestStartStop/group/no-preload/serial/DeployApp (8.22s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-187607 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7ac9322d-8d47-4118-be2a-c9e6190f248c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7ac9322d-8d47-4118-be2a-c9e6190f248c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003928624s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-187607 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (12.66s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-653361 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-653361 --alsologtostderr -v=3: (12.656545164s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.66s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-726261 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-726261 --alsologtostderr -v=3: (18.177264927s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.18s)

TestStartStop/group/no-preload/serial/Stop (16.33s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-187607 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-187607 --alsologtostderr -v=3: (16.325483719s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653361 -n newest-cni-653361
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653361 -n newest-cni-653361: exit status 7 (94.645787ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-653361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (11.02s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 08:44:49.845168   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/functional-762247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-653361 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.673973403s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-653361 -n newest-cni-653361
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.02s)
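SecondStart reruns the identical command against the existing, stopped profile; minikube reuses the saved profile config and the already-provisioned container, which is why it finishes in about 11s versus 25.7s for FirstStart. Flags may even be omitted on a restart:

minikube start -p newest-cni-653361   # saved profile config is reused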

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261: exit status 7 (92.830517ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-726261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-726261 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.563693015s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-726261 -n default-k8s-diff-port-726261
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.90s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-187607 -n no-preload-187607
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-187607 -n no-preload-187607: exit status 7 (96.74232ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-187607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (51.13s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-187607 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.80041642s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-187607 -n no-preload-187607
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-653361 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
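VerifyKubernetesImages lists the node's images as JSON and logs anything outside the expected set for the Kubernetes version, here only the kindnet image. The same filter by hand (a sketch, assuming jq; the repoTags field name is assumed from CRI image metadata):

minikube -p newest-cni-653361 image list --format=json | jq -r '.[].repoTags[]' | sort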

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rlnf7" [0171abf9-abe8-4871-8715-2ece3d41ce1a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003504477s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
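UserAppExistsAfterStop and the AddonExistsAfterStop check that follows both assert that the dashboard enabled before Stop comes back healthy after SecondStart, i.e. addon state survives a stop/start cycle. Hand equivalent:

kubectl --context old-k8s-version-057894 -n kubernetes-dashboard \
  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m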

TestStartStop/group/embed-certs/serial/FirstStart (44.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.149742659s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.15s)
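--embed-certs writes base64 certificate data directly into kubeconfig instead of file paths under ~/.minikube/profiles; the repeated "Loading client cert failed ... client.crt: no such file or directory" lines elsewhere in this log are exactly what path-based entries produce once a profile's files have been deleted. Verifying the embedding (a sketch):

kubectl config view --raw --minify --context=embed-certs-756339 | grep client-certificate
# expected: client-certificate-data (embedded), no client-certificate file path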

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-rlnf7" [0171abf9-abe8-4871-8715-2ece3d41ce1a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005129338s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-057894 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-057894 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fnxnm" [01fb6bc8-9147-4bd4-8515-54325b5f4163] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003086102s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c25qj" [e3ba63d6-632b-470c-8554-66f566a1a351] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003335563s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/DeployApp (8.21s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-756339 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9d266def-c91d-4fd0-b04a-42a6fd90082f] Pending
helpers_test.go:352: "busybox" [9d266def-c91d-4fd0-b04a-42a6fd90082f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9d266def-c91d-4fd0-b04a-42a6fd90082f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003503772s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-756339 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.21s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fnxnm" [01fb6bc8-9147-4bd4-8515-54325b5f4163] Running
E1123 08:45:47.605262   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003716377s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-726261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c25qj" [e3ba63d6-632b-470c-8554-66f566a1a351] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002753355s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-187607 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-726261 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-187607 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Stop (16.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-756339 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-756339 --alsologtostderr -v=3: (16.35748283s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.36s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756339 -n embed-certs-756339
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756339 -n embed-certs-756339: exit status 7 (75.23269ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-756339 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (25.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 08:46:16.379645   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/calico-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:22.153816   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:22.160206   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:22.171562   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:22.192987   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:22.234361   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:22.315798   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:22.477301   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:22.799438   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:23.441399   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:24.723513   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:26.621093   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/calico-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:27.285372   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:32.407445   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-756339 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (24.978930061s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756339 -n embed-certs-756339
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (25.29s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zs7hv" [068df84f-d0fd-4037-a87f-270fb7ce8b9c] Running
E1123 08:46:42.649737   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/kindnet-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002803978s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zs7hv" [068df84f-d0fd-4037-a87f-270fb7ce8b9c] Running
E1123 08:46:46.917107   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:46.923424   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:46.934741   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:46.956053   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:46.997369   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:47.078726   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:47.103098   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/calico-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:47.240905   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:47.562573   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:48.204770   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/custom-flannel-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:46:49.048853   14488 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/auto-351793/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002646476s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-756339 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-756339 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.18s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-351793 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-351793

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-351793

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-351793

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-351793

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-351793

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-351793

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-351793

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-351793

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-351793

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-351793

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /etc/hosts:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /etc/resolv.conf:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-351793

>>> host: crictl pods:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: crictl containers:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> k8s: describe netcat deployment:
error: context "kubenet-351793" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-351793" does not exist

>>> k8s: netcat logs:
error: context "kubenet-351793" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-351793" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-351793" does not exist

>>> k8s: coredns logs:
error: context "kubenet-351793" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-351793" does not exist

>>> k8s: api server logs:
error: context "kubenet-351793" does not exist

>>> host: /etc/cni:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: ip a s:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: ip r s:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: iptables-save:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: iptables table nat:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-351793" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-351793" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-351793" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: kubelet daemon config:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> k8s: kubelet logs:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-840508
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-747782
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-716098
contexts:
- context:
    cluster: NoKubernetes-840508
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-840508
  name: NoKubernetes-840508
- context:
    cluster: cert-expiration-747782
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-747782
  name: cert-expiration-747782
- context:
    cluster: pause-716098
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-716098
  name: pause-716098
current-context: NoKubernetes-840508
kind: Config
users:
- name: NoKubernetes-840508
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/NoKubernetes-840508/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/NoKubernetes-840508/client.key
- name: cert-expiration-747782
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/cert-expiration-747782/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/cert-expiration-747782/client.key
- name: pause-716098
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/pause-716098/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/pause-716098/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-351793

>>> host: docker daemon status:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: docker daemon config:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: docker system info:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: cri-docker daemon status:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: cri-docker daemon config:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: cri-dockerd version:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: containerd daemon status:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: containerd daemon config:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: containerd config dump:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: crio daemon status:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: crio daemon config:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: /etc/crio:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

>>> host: crio config:
* Profile "kubenet-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-351793"

----------------------- debugLogs end: kubenet-351793 [took: 3.028778959s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-351793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-351793
--- SKIP: TestNetworkPlugins/group/kubenet (3.18s)

TestNetworkPlugins/group/cilium (3.41s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-351793 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-351793

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-351793

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-351793

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-351793

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-351793

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-351793

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-351793

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-351793

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-351793

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-351793

>>> host: /etc/nsswitch.conf:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /etc/hosts:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /etc/resolv.conf:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-351793

>>> host: crictl pods:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: crictl containers:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> k8s: describe netcat deployment:
error: context "cilium-351793" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-351793" does not exist

>>> k8s: netcat logs:
error: context "cilium-351793" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-351793" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-351793" does not exist

>>> k8s: coredns logs:
error: context "cilium-351793" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-351793" does not exist

>>> k8s: api server logs:
error: context "cilium-351793" does not exist

>>> host: /etc/cni:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: ip a s:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: ip r s:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: iptables-save:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: iptables table nat:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-351793

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-351793

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-351793" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-351793" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-351793

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-351793

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-351793" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-351793" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-351793" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-351793" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-351793" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: kubelet daemon config:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> k8s: kubelet logs:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-840508
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-747782
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21966-10964/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-716098
contexts:
- context:
    cluster: NoKubernetes-840508
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-840508
  name: NoKubernetes-840508
- context:
    cluster: cert-expiration-747782
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-747782
  name: cert-expiration-747782
- context:
    cluster: pause-716098
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:37:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-716098
  name: pause-716098
current-context: NoKubernetes-840508
kind: Config
users:
- name: NoKubernetes-840508
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/NoKubernetes-840508/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/NoKubernetes-840508/client.key
- name: cert-expiration-747782
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/cert-expiration-747782/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/cert-expiration-747782/client.key
- name: pause-716098
  user:
    client-certificate: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/pause-716098/client.crt
    client-key: /home/jenkins/minikube-integration/21966-10964/.minikube/profiles/pause-716098/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-351793

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: docker daemon config:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: docker system info:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: cri-docker daemon status:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: cri-docker daemon config:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: cri-dockerd version:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: containerd daemon status:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: containerd daemon config:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: containerd config dump:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: crio daemon status:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: crio daemon config:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: /etc/crio:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

>>> host: crio config:
* Profile "cilium-351793" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-351793"

----------------------- debugLogs end: cilium-351793 [took: 3.254745716s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-351793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-351793
--- SKIP: TestNetworkPlugins/group/cilium (3.41s)
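The probes' own hint is the quickest way to see what actually exists on the runner; with this tree's binary that is:

  $ out/minikube-linux-amd64 profile list

which, going by the kubeconfig dump above, should list NoKubernetes-840508, cert-expiration-747782, and pause-716098 rather than cilium-351793.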

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-177890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-177890
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)
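The guard skips everything except the virtualbox driver. To exercise this group outside CI, the integration suite takes the driver via -minikube-start-args; the flag and layout below follow minikube's integration-test conventions and should be treated as assumptions for other checkouts:

  $ go test ./test/integration -run 'TestStartStop/group/disable-driver-mounts' \
      -timeout 30m -minikube-start-args='--driver=virtualbox'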
