This article explains what happens to Pod status in Kubernetes when a Node becomes abnormal. The walkthrough below runs the experiment step by step: stop the kubelet on a node, observe what the control plane does, and then dig into the controller code to explain why.

Suppose a node is running Pods and we stop the kubelet process on that node. Will the Pods on it be killed? Will they be recreated on other nodes?
Conclusions:

(1) The Node's status becomes NotReady.
(2) The Pods show no status change for the first 5 minutes (the node controller's default pod eviction timeout). After 5 minutes: DaemonSet Pods become NodeLost; Deployment, StatefulSet, and Static Pods first become NodeLost and then immediately Unknown. Deployment Pods are recreated elsewhere, but if the Deployment's nodeSelector pins it to the node whose kubelet was stopped, the recreated Pod stays Pending. Static Pods and StatefulSet Pods remain Unknown.
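Where does the 5-minute figure come from? It corresponds to the node controller's eviction timing in kube-controller-manager. A sketch of the relevant flags with their defaults (note these are the classic flags; newer releases use taint-based eviction, where per-Pod tolerationSeconds plays this role instead):

kube-controller-manager \
    --node-monitor-grace-period=40s \
    --pod-eviction-timeout=5m0s
# --node-monitor-grace-period: node is marked NotReady after ~40s without heartbeats
# --pod-eviction-timeout: Pods on the NotReady node are deleted about 5 minutes later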
What happens to the Node and its Pods if the kubelet comes back up 10 minutes later?
Conclusions:

(1) The Node's status becomes Ready again.
(2) DaemonSet Pods are not recreated; the old Pods go straight back to Running.
(3) The Deployment Pods on the Node whose kubelet was stopped are deleted (likely because the old Pods' status changed in the cluster, and by the time that change was processed the Deployment already had its desired number of replicas, so the old Pods were removed).
(4) StatefulSet Pods are recreated.
(5) Static Pods are not restarted, but their running time is reset to 0 when the kubelet comes back up.
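These observations are easy to reproduce on a test cluster. Assuming a systemd-managed kubelet (adjust the service commands for your distribution), something like:

# on the target node
systemctl stop kubelet
# from a workstation: watch the node flip to NotReady (~40s) and the Pods change state (~5m)
kubectl get nodes -w
kubectl get pods -o wide -w
# on the target node, 10 minutes later
systemctl start kubelet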
To restate the StatefulSet case: after the kubelet stops, its Pods become NodeLost and then Unknown, but they are not restarted; only after the kubelet comes back up are the StatefulSet Pods recreated.
One more detail: Static Pods are not actually restarted after the kubelet restarts, yet when you query a Static Pod's status in the cluster, its running time has changed.
Node down后,StatefulSet Pods並沒有重建,為什麼?
Reading the node controller, we find that for every Pod except DaemonSet Pods, it calls the delete pod API to delete the Pod.
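As a rough illustration (this is not the actual node controller source), here is a minimal client-go sketch of "delete every Pod on a node except DaemonSet-managed ones"; the function names, the clientset wiring, and the owner-reference check are assumptions made for this example:

package nodefailure

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// evictPodsOnNode loosely mimics the node controller: every Pod on the
// given node is deleted via the API, except Pods owned by a DaemonSet.
func evictPodsOnNode(ctx context.Context, client kubernetes.Interface, nodeName string) error {
    pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
        FieldSelector: "spec.nodeName=" + nodeName,
    })
    if err != nil {
        return err
    }
    for i := range pods.Items {
        pod := &pods.Items[i]
        if ownedByDaemonSet(pod) {
            continue // DaemonSet Pods are left alone
        }
        // Note: this call only marks the Pod by setting deletionTimestamp;
        // the kubelet is the one that actually finishes the deletion.
        if err := client.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{}); err != nil {
            return err
        }
    }
    return nil
}

func ownedByDaemonSet(pod *v1.Pod) bool {
    for _, ref := range pod.OwnerReferences {
        if ref.Kind == "DaemonSet" {
            return true
        }
    }
    return false
}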
However, calling the delete pod API does not by itself remove the Pod object from the apiserver/etcd; it merely sets the Pod's deletionTimestamp, marking the Pod for deletion. The component that actually deletes the Pod is the kubelet: after gracefully terminating the Pod, the kubelet deletes the Pod object for real. Only then does the statefulset controller notice that a replica is missing and recreate it.
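You can observe this two-phase behavior directly. For a hypothetical StatefulSet Pod named web-0, issuing a delete while its node's kubelet is down leaves the object in place with only deletionTimestamp set:

kubectl delete pod web-0 --wait=false
# the Pod still exists, but is now marked for deletion:
kubectl get pod web-0 -o jsonpath='{.metadata.deletionTimestamp}'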
But here the kubelet is down and cannot talk to the master, so the Pod object can never be removed from etcd. If the Pod object could be deleted successfully, the Pod could be rebuilt on another Node.
Also note that the statefulset controller only deletes Pods that are isFailed (i.e. whose status.phase is Failed), but our Pods are stuck in the Unknown state, so they never qualify:
// delete and recreate failed pods
if isFailed(replicas[i]) {
    ssc.recorder.Eventf(set, v1.EventTypeWarning, "RecreatingFailedPod",
        "StatefulSetPlus %s/%s is recreating failed Pod %s",
        set.Namespace,
        set.Name,
        replicas[i].Name)
    if err := ssc.podControl.DeleteStatefulPlusPod(set, replicas[i]); err != nil {
        return &status, err
    }
    // adjust the status counters for the revision the failed Pod belonged to
    if getPodRevision(replicas[i]) == currentRevision.Name {
        status.CurrentReplicas--
    }
    if getPodRevision(replicas[i]) == updateRevision.Name {
        status.UpdatedReplicas--
    }
    status.Replicas--
    replicas[i] = newVersionedStatefulSetPlusPod(
        currentSet,
        updateSet,
        currentRevision.Name,
        updateRevision.Name,
        i)
}

So for node failures, to safeguard stateful (non-quorum) applications, the following behaviors should be added:
Monitor whether the node's network, kubelet process, operating system, and so on are abnormal, and handle each failure mode differently.
For example, if it is a network failure and the Pod can no longer serve traffic, run kubectl delete pod <pod-name> --force --grace-period=0 to force-delete the Pod from etcd.
Once the Pod is force-deleted, the statefulset controller automatically recreates it on another Node.
Alternatively, a blunter approach is to give up graceful deletion entirely: if a StatefulSet Pod's GracePeriodSeconds is nil or 0, the object is deleted from etcd directly (see the client-go sketch below, and the BeforeDelete code after it that implements this on the apiserver side).
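A minimal client-go sketch of this force delete, assuming an existing clientset; the function name, namespace, and Pod name are placeholders:

package forcedelete

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// forceDeletePod asks the apiserver to remove the Pod object immediately:
// with GracePeriodSeconds set to 0, BeforeDelete (shown below) takes the
// non-graceful path and the object is deleted from etcd right away.
func forceDeletePod(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
    gracePeriod := int64(0)
    return client.CoreV1().Pods(namespace).Delete(ctx, name, metav1.DeleteOptions{
        GracePeriodSeconds: &gracePeriod,
    })
}

Use this with care: a force-deleted Pod may still be running on the partitioned node, so two instances can momentarily claim the same identity, which is exactly why the advice above is scoped to non-quorum stateful applications.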
// BeforeDelete tests whether the object can be gracefully deleted.
// If graceful is set, the object should be gracefully deleted. If gracefulPending
// is set, the object has already been gracefully deleted (and the provided grace
// period is longer than the time to deletion). An error is returned if the
// condition cannot be checked or the gracePeriodSeconds is invalid. The options
// argument may be updated with default values if graceful is true. Second place
// where we set deletionTimestamp is pkg/registry/generic/registry/store.go.
// This function is responsible for setting deletionTimestamp during gracefulDeletion,
// other one for cascading deletions.
func BeforeDelete(strategy RESTDeleteStrategy, ctx context.Context, obj runtime.Object, options *metav1.DeleteOptions) (graceful, gracefulPending bool, err error) {
    objectMeta, gvk, kerr := objectMetaAndKind(strategy, obj)
    if kerr != nil {
        return false, false, kerr
    }
    if errs := validation.ValidateDeleteOptions(options); len(errs) > 0 {
        return false, false, errors.NewInvalid(schema.GroupKind{Group: metav1.GroupName, Kind: "DeleteOptions"}, "", errs)
    }
    // Checking the Preconditions here to fail early. They'll be enforced later on when we actually do the deletion, too.
    if options.Preconditions != nil && options.Preconditions.UID != nil && *options.Preconditions.UID != objectMeta.GetUID() {
        return false, false, errors.NewConflict(schema.GroupResource{Group: gvk.Group, Resource: gvk.Kind}, objectMeta.GetName(), fmt.Errorf("the UID in the precondition (%s) does not match the UID in record (%s). The object might have been deleted and then recreated", *options.Preconditions.UID, objectMeta.GetUID()))
    }
    gracefulStrategy, ok := strategy.(RESTGracefulDeleteStrategy)
    if !ok {
        // If we're not deleting gracefully there's no point in updating Generation, as we won't update
        // the object before deleting it.
        return false, false, nil
    }
    // if the object is already being deleted, no need to update generation.
    if objectMeta.GetDeletionTimestamp() != nil {
        // if we are already being deleted, we may only shorten the deletion grace period
        // this means the object was gracefully deleted previously but deletionGracePeriodSeconds was not set,
        // so we force deletion immediately
        // IMPORTANT:
        // The deletion operation happens in two phases.
        // 1. Update to set DeletionGracePeriodSeconds and DeletionTimestamp
        // 2. Delete the object from storage.
        // If the update succeeds, but the delete fails (network error, internal storage error, etc.),
        // a resource was previously left in a state that was non-recoverable. We
        // check if the existing stored resource has a grace period as 0 and if so
        // attempt to delete immediately in order to recover from this scenario.
        if objectMeta.GetDeletionGracePeriodSeconds() == nil || *objectMeta.GetDeletionGracePeriodSeconds() == 0 {
            return false, false, nil
        }
        ...
    }
    ...
}
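The takeaway from BeforeDelete: when the stored object's grace period is already 0, or the delete is not graceful at all, the apiserver skips the two-phase graceful dance and removes the object from storage immediately. That is exactly why a force delete unblocks StatefulSet Pod recreation when the kubelet on a failed node can no longer confirm termination.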