Understanding the etcd Lease Implementation


Everyone knows etcd lease: you attach a TTL to a key, much like redis expire; the key is deleted when the lease expires, and the lease can be renewed. Internally, though, the two implementations have nothing in common.

redis deletes lazily: a key is removed by the master either when it is accessed or when the periodic cron-style scan finds it expired. A redis slave never expires data on its own; it only applies the delete commands replicated from the master.

In this post we dig into how etcd lease is implemented and how it interacts with raft. If you would rather skip the code, here are the takeaways:

  1. TTL expiry is driven by a min-heap, much like the golang timer implementation, except it uses the standard go heap library directly. Being a single heap, precision inevitably suffers at scale, but TTLs are normally second-granularity, so it hardly matters.
  2. Apart from Renew, every other operation (Grant, expiry revocation, and so on) goes through a round of raft consensus.
  3. As with redis, only the leader expires data; followers never expire leases on their own.
  4. Leases are periodically checkpointed: the leader syncs each lease's remainingTTL to the followers for use during failover. Why it is called a Checkpoint is anyone's guess; the name takes some getting used to.
  5. Lease records live in their own boltdb bucket and do not include the attached keys; on restart the associations are rebuilt from keyBucketName.

Typical Usage

Unlike redis, an etcd lease must be created on its own before it can be attached to keys.

# grant a lease with 500 second TTL
$ etcdctl lease grant 500
lease 694d5765fc71500b granted with TTL(500s)

# attach key zoo1 to lease 694d5765fc71500b
$ etcdctl put zoo1 val1 --lease=694d5765fc71500b
OK

# attach key zoo2 to lease 694d5765fc71500b
$ etcdctl put zoo2 val2 --lease=694d5765fc71500b
OK

The above grants a lease with a 500s TTL, which can then be attached to multiple keys; when the lease expires, those keys are deleted automatically. For more operations such as revoke and keepalive, see the official leases documentation[1].
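Programmatically, the same flow with the Go clientv3 API looks roughly like this; a minimal runnable sketch, where the endpoint 127.0.0.1:2379 is an assumption and production code would handle errors less bluntly:

package main

import (
 "context"
 "fmt"
 "time"

 "go.etcd.io/etcd/clientv3"
)

func main() {
 cli, err := clientv3.New(clientv3.Config{
  Endpoints:   []string{"127.0.0.1:2379"}, // assumed local endpoint
  DialTimeout: 5 * time.Second,
 })
 if err != nil {
  panic(err)
 }
 defer cli.Close()

 ctx := context.Background()

 // grant a lease with a 500 second TTL
 grant, err := cli.Grant(ctx, 500)
 if err != nil {
  panic(err)
 }

 // attach two keys to the lease; they are deleted when it expires
 if _, err := cli.Put(ctx, "zoo1", "val1", clientv3.WithLease(grant.ID)); err != nil {
  panic(err)
 }
 if _, err := cli.Put(ctx, "zoo2", "val2", clientv3.WithLease(grant.ID)); err != nil {
  panic(err)
 }

 // KeepAlive renews the lease in the background until ctx is canceled
 ch, err := cli.KeepAlive(ctx, grant.ID)
 if err != nil {
  panic(err)
 }
 fmt.Println("keepalive TTL:", (<-ch).TTL)
}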

Interface

The etcdserver directory defines the Lessor interface below; the concrete implementation lives in the project's lease[2] directory.

type Lessor interface {
 // LeaseGrant sends LeaseGrant request to raft and apply it after committed.
 LeaseGrant(ctx context.Context, r *pb.LeaseGrantRequest) (*pb.LeaseGrantResponse, error)
 // LeaseRevoke sends LeaseRevoke request to raft and apply it after committed.
 LeaseRevoke(ctx context.Context, r *pb.LeaseRevokeRequest) (*pb.LeaseRevokeResponse, error)
 // LeaseRenew renews the lease with given ID. The renewed TTL is returned. Or an error
 // is returned.
 LeaseRenew(ctx context.Context, id lease.LeaseID) (int64, error)
 // LeaseTimeToLive retrieves lease information.
 LeaseTimeToLive(ctx context.Context, r *pb.LeaseTimeToLiveRequest) (*pb.LeaseTimeToLiveResponse, error)
 // LeaseLeases lists all leases.
 LeaseLeases(ctx context.Context, r *pb.LeaseLeasesRequest) (*pb.LeaseLeasesResponse, error)
}

Granting a Lease

A client request first goes through v3rpc[3] and then lands in the EtcdServer.LeaseGrant method.

func (s *EtcdServer) LeaseGrant(ctx context.Context, r *pb.LeaseGrantRequest) (*pb.LeaseGrantResponse, error) {
 // no id given? choose one
 for r.ID == int64(lease.NoLease) {
  // only use positive int64 id's
  r.ID = int64(s.reqIDGen.Next() & ((1 << 63) - 1))
 }
 resp, err := s.raftRequestOnce(ctx, pb.InternalRaftRequest{LeaseGrant: r})
 if err != nil {
  return nil, err
 }
 return resp.(*pb.LeaseGrantResponse), nil
}

If no lease id was given, one is drawn from the reqIDGen generator, and the request is proposed through raft (the etcd raft implementation deserves its own post). Once a quorum has accepted the log entry, applierv3 ultimately calls Lessor Grant[4].

func (le *lessor) Grant(id LeaseID, ttl int64) (*Lease, error) {
 if id == NoLease {
  return nil, ErrLeaseNotFound
 }

 if ttl > MaxLeaseTTL {
  return nil, ErrLeaseTTLTooLarge
 }

 // TODO: when lessor is under high load, it should give out lease
 // with longer TTL to reduce renew load.
 l := &Lease{
  ID:      id,
  ttl:     ttl,
  itemSet: make(map[LeaseItem]struct{}),
  revokec: make(chan struct{}),
 }

 le.mu.Lock()
 defer le.mu.Unlock()

 if _, ok := le.leaseMap[id]; ok {
  return nil, ErrLeaseExists
 }

 if l.ttl < le.minLeaseTTL {
  l.ttl = le.minLeaseTTL
 }

 if le.isPrimary() {
  l.refresh(0)
 } else {
  l.forever()
 }

 le.leaseMap[id] = l
 item := &LeaseWithTime{id: l.ID, time: l.expiry.UnixNano()}
 le.leaseExpiredNotifier.RegisterOrUpdate(item)
 l.persistTo(le.b)

 leaseTotalTTLs.Observe(float64(l.ttl))
 leaseGranted.Inc()

 if le.isPrimary() {
  le.scheduleCheckpointIfNeeded(l)
 }

 return l, nil
}
  1. Validate the id and the ttl.
  2. Build a Lease struct; every lease gets one.
  3. Look up leaseMap to make sure no lease with the same id already exists; duplicates are rejected.
  4. isPrimary checks whether this etcd node is the leader. refresh derives the expiry from the TTL, while forever marks the lease as never expiring, so a lease only ever expires on the leader; on followers it lives forever.
  5. Register the lease with leaseExpiredNotifier, a min-heap (i.e. a priority queue) that a separate goroutine polls for expirations (see the sketch after this list); on followers this heap is effectively inert.
  6. persistTo writes the lease to boltdb; only the lease id, ttl and remainingTTL are stored, never the attached keys.
  7. When eligible, the leader calls scheduleCheckpointIfNeeded to update checkpointHeap; what a checkpoint is will be covered later.
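To make the min-heap in step 5 concrete, here is a minimal runnable sketch of the kind of priority queue leaseExpiredNotifier is built on. The real LeaseQueue in the lease package also tracks a per-item index and a map so RegisterOrUpdate can fix up existing entries; the type names here are illustrative simplifications:

package main

import (
 "container/heap"
 "fmt"
 "time"
)

// leaseWithTime mirrors the shape of etcd's LeaseWithTime:
// a lease id plus its expiry in unix nanoseconds.
type leaseWithTime struct {
 id   int64
 time int64 // expiry, unix nanoseconds
}

// leaseQueue is a min-heap ordered by expiry time.
type leaseQueue []*leaseWithTime

func (q leaseQueue) Len() int           { return len(q) }
func (q leaseQueue) Less(i, j int) bool { return q[i].time < q[j].time }
func (q leaseQueue) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }

func (q *leaseQueue) Push(x interface{}) { *q = append(*q, x.(*leaseWithTime)) }

func (q *leaseQueue) Pop() interface{} {
 old := *q
 item := old[len(old)-1]
 *q = old[:len(old)-1]
 return item
}

func main() {
 q := leaseQueue{}
 heap.Init(&q)
 now := time.Now()
 heap.Push(&q, &leaseWithTime{id: 1, time: now.Add(3 * time.Second).UnixNano()})
 heap.Push(&q, &leaseWithTime{id: 2, time: now.Add(1 * time.Second).UnixNano()})
 // q[0] peeks at the soonest-to-expire lease without removing it,
 // which is what Poll does in expireExists.
 fmt.Println("next to expire:", q[0].id) // prints lease 2
}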

Attaching Keys

Once a lease exists, it has to be associated with keys, which is simply a put operation. After the request passes through raft, applierv3 calls put[5]; the code is fairly long.

func (a *applierV3backend) Put(txn mvcc.TxnWrite, p *pb.PutRequest) (resp *pb.PutResponse, trace *traceutil.Trace, err error) {
 resp = &pb.PutResponse{}
  ......

 resp.Header.Revision = txn.Put(p.Key, val, leaseID)
 trace.AddField(traceutil.Field{Key: "response_revision", Value: resp.Header.Revision})
 return resp, trace, nil
}

The association really happens in the final txn.Put; let's look at storeTxnWrite.Put[6]

func (tw *storeTxnWrite) put(key, value []byte, leaseID lease.LeaseID) {
 rev := tw.beginRev + 1
 c := rev
 oldLease := lease.NoLease

 // if the key exists before, use its previous created and
 // get its previous leaseID
 _, created, ver, err := tw.s.kvindex.Get(key, rev)
 if err == nil {
  c = created.main
  oldLease = tw.s.le.GetLease(lease.LeaseItem{Key: string(key)})
 }
  ......

 ver = ver + 1
 kv := mvccpb.KeyValue{
  Key:            key,
  Value:          value,
  CreateRevision: c,
  ModRevision:    rev,
  Version:        ver,
  Lease:          int64(leaseID),
 }
  ......

 if oldLease != lease.NoLease {
  if tw.s.le == nil {
   panic("no lessor to detach lease")
  }
  err = tw.s.le.Detach(oldLease, []lease.LeaseItem{{Key: string(key)}})
  ......
 }
 if leaseID != lease.NoLease {
  if tw.s.le == nil {
   panic("no lessor to attach lease")
  }
  err = tw.s.le.Attach(leaseID, []lease.LeaseItem{{Key: string(key)}})
  if err != nil {
   panic("unexpected error from lease Attach")
  }
 }
 tw.trace.Step("attach lease to kv pair")
}

Stripping away the unrelated logic, the lease-related parts are:

  1. First fetch oldLease, since the key may already be attached to a lease; the mvccpb.KeyValue is built carrying the lease id and is persisted to boltdb.
  2. If oldLease is valid, Detach it first to drop the old association.
  3. Finally call Attach to bind the key to the new lease:
func (le *lessor) Attach(id LeaseID, items []LeaseItem) error {
 le.mu.Lock()
 defer le.mu.Unlock()

 l := le.leaseMap[id]
 if l == nil {
  return ErrLeaseNotFound
 }

 l.mu.Lock()
 for _, it := range items {
  l.itemSet[it] = struct{}{}
  le.itemMap[it] = id
 }
 l.mu.Unlock()
 return nil
}

So in effect Attach just records the items in the lease's map. Persisted? No: the association itself is never written to disk. On restart it is rebuilt as a side effect of loading all key/values from the boltdb keyBucketName bucket.
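For contrast, recall step 6 of Grant: persistTo does write the lease record itself. A sketch of what it plausibly stores, based on the fields seen so far (illustrative, not verbatim; field names may differ across versions): only the id, ttl and remainingTTL, never the item set.

// Sketch of Lease.persistTo (lease/lessor.go); illustrative, not verbatim.
func (l *Lease) persistTo(b backend.Backend) {
 key := int64ToBytes(int64(l.ID))

 // Only the lease metadata is serialized; attached keys are not.
 lpb := leasepb.Lease{ID: int64(l.ID), TTL: l.ttl, RemainingTTL: l.remainingTTL}
 val, err := lpb.Marshal()
 if err != nil {
  panic("failed to marshal lease proto item")
 }

 b.BatchTx().Lock()
 b.BatchTx().UnsafePut(leaseBucketName, key, val)
 b.BatchTx().Unlock()
}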

Renewal

In the v3rpc directory, LeaseKeepAlive[7] handles renewal requests and ultimately calls EtcdServer's LeaseRenew.

func (s *EtcdServer) LeaseRenew(ctx context.Context, id lease.LeaseID) (int64, error) {
 ttl, err := s.lessor.Renew(id)
 if err == nil { // already requested to primary lessor(leader)
  return ttl, nil
 }
 if err != lease.ErrNotPrimary {
  return -1, err
 }

 cctx, cancel := context.WithTimeout(ctx, s.Cfg.ReqTimeout())
 defer cancel()

 // renewals don't go through raft; forward to leader manually
 for cctx.Err() == nil && err != nil {
  leader, lerr := s.waitLeader(cctx)
  if lerr != nil {
   return -1, lerr
  }
  for _, url := range leader.PeerURLs {
   lurl := url + leasehttp.LeasePrefix
   ttl, err = leasehttp.RenewHTTP(cctx, id, lurl, s.peerRt)
   if err == nil || err == lease.ErrLeaseNotFound {
    return ttl, err
   }
  }
 }

 if cctx.Err() == context.DeadlineExceeded {
  return -1, ErrTimeout
 }
 return -1, ErrCanceled
}

This first tries Renew against the local lessor. If the local node is not the primary, the request cannot be served here: waitLeader blocks until a leader exists (waiting out an election if necessary), and the renewal is then forwarded to the leader over HTTP, since renewals never go through raft. If the context times out, an error is returned.

// Renew renews an existing lease. If the given lease does not exist or
// has expired, an error will be returned.
func (le *lessor) Renew(id LeaseID) (int64, error) {
 le.mu.RLock()
 if !le.isPrimary() {
  // forward renew request to primary instead of returning error.
  le.mu.RUnlock()
  return -1, ErrNotPrimary
 }

 demotec := le.demotec

 l := le.leaseMap[id]
 if l == nil {
  le.mu.RUnlock()
  return -1, ErrLeaseNotFound
 }
 // Clear remaining TTL when we renew if it is set
 clearRemainingTTL := le.cp != nil && l.remainingTTL > 0

 le.mu.RUnlock()
 if l.expired() {
  select {
  // A expired lease might be pending for revoking or going through
  // quorum to be revoked. To be accurate, renew request must wait for the
  // deletion to complete.
  case <-l.revokec:
   return -1, ErrLeaseNotFound
  // The expired lease might fail to be revoked if the primary changes.
  // The caller will retry on ErrNotPrimary.
  case <-demotec:
   return -1, ErrNotPrimary
  case <-le.stopC:
   return -1, ErrNotPrimary
  }
 }

 // Clear remaining TTL when we renew if it is set
 // By applying a RAFT entry only when the remainingTTL is already set, we limit the number
 // of RAFT entries written per lease to a max of 2 per checkpoint interval.
 if clearRemainingTTL {
  le.cp(context.Background(), &pb.LeaseCheckpointRequest{Checkpoints: []*pb.LeaseCheckpoint{{ID: int64(l.ID), Remaining_TTL: 0}}})
 }

 le.mu.Lock()
 l.refresh(0)
 item := &LeaseWithTime{id: l.ID, time: l.expiry.UnixNano()}
 le.leaseExpiredNotifier.RegisterOrUpdate(item)
 le.mu.Unlock()

 leaseRenewed.Inc()
 return l.ttl, nil
}
  1. isPrimary: if the current node is not the leader, return ErrNotPrimary.
  2. Fetch the Lease struct from leaseMap; if it is missing, return an error.
  3. If a checkpoint callback is configured and the lease has a remaining TTL recorded, set clearRemainingTTL.
  4. If expired says the lease has already lapsed, it can no longer be renewed; wait for the pending revocation to complete and return an error (a sketch of expired follows this list).
  5. Apply the clearRemainingTTL logic: propose a checkpoint with Remaining_TTL 0 through raft so followers drop the stale value; doing this only when remainingTTL was set caps the raft entries per lease at two per checkpoint interval.
  6. Rebuild the LeaseWithTime item and update the leaseExpiredNotifier min-heap.
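Step 4 hinges on expired. Inferred from the fields used throughout lessor.go (treat this as a sketch, not the verbatim source), the zero expiry time acts as the "forever" sentinel, which is why followers never report a lease as expired:

// Sketch of Lease.Remaining/expired; illustrative, not verbatim.
func (l *Lease) Remaining() time.Duration {
 if l.expiry.IsZero() {
  return time.Duration(math.MaxInt64) // forever: never expires (followers)
 }
 return time.Until(l.expiry)
}

func (l *Lease) expired() bool {
 return l.Remaining() <= 0
}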

Expiry

When a lease expires, every etcd key attached to it is deleted. Let's see how that is implemented.

func NewLessor(lg *zap.Logger, b backend.Backend, cfg LessorConfig) Lessor {
 return newLessor(lg, b, cfg)
}

Inside, newLessor spins up an asynchronous goroutine running runLoop

func (le *lessor) runLoop() {
 defer close(le.doneC)

 for {
  le.revokeExpiredLeases()
  le.checkpointScheduledLeases()

  select {
  case <-time.After(500 * time.Millisecond):
  case <-le.stopC:
   return
  }
 }
}

In other words, every 500ms it calls revokeExpiredLeases to reap expired leases (checkpointScheduledLeases is covered later).

// revokeExpiredLeases finds all leases past their expiry and sends them to expired channel for
// to be revoked.
func (le *lessor) revokeExpiredLeases() {
 var ls []*Lease

 // rate limit
 revokeLimit := leaseRevokeRate / 2

 le.mu.RLock()
 if le.isPrimary() {
  ls = le.findExpiredLeases(revokeLimit)
 }
 le.mu.RUnlock()

 if len(ls) != 0 {
  select {
  case <-le.stopC:
   return
  case le.expiredC <- ls:
  default:
   // the receiver of expiredC is probably busy handling
   // other stuff
   // let's try this next time after 500ms
  }
 }
}

Only when the current node isPrimary does it call findExpiredLeases to collect expired leases, which are then pushed into the expiredC channel for the upper layer to consume. Here is findExpiredLeases:

func (le *lessor) findExpiredLeases(limit int) []*Lease {
 leases := make([]*Lease, 0, 16)

 for {
  l, ok, next := le.expireExists()
  if !ok && !next {
   break
  }
  if !ok {
   continue
  }
  if next {
   continue
  }

  if l.expired() {
   leases = append(leases, l)

   // reach expired limit
   if len(leases) == limit {
    break
   }
  }
 }

 return leases
}

Expired leases cannot all be revoked in one go; revocation is rate limited, hence the limit check. Now for expireExists:

// expireExists returns true if expiry items exist.
// It pops only when expiry item exists.
// "next" is true, to indicate that it may exist in next attempt.
func (le *lessor) expireExists() (l *Lease, ok bool, next bool) {
 if le.leaseExpiredNotifier.Len() == 0 {
  return nil, false, false
 }

 item := le.leaseExpiredNotifier.Poll()
 l = le.leaseMap[item.id]
 if l == nil {
  // lease has expired or been revoked
  // no need to revoke (nothing is expiry)
  le.leaseExpiredNotifier.Unregister() // O(log N)
  return nil, false, true
 }
 now := time.Now()
 if now.UnixNano() < item.time /* expiration time */ {
  // Candidate expirations are caught up, reinsert this item
  // and no need to revoke (nothing is expiry)
  return l, false, false
 }

 // recheck if revoke is complete after retry interval
 item.time = now.Add(le.expiredLeaseRetryInterval).UnixNano()
 le.leaseExpiredNotifier.RegisterOrUpdate(item)
 return l, true, false
}

This simply Polls the smallest item off the top of the min-heap and checks whether it has expired. Note that an expired item is not removed from the heap right away: its time is pushed expiredLeaseRetryInterval (3s) into the future and it is re-registered. Why? Because revocation goes through raft and may lag or fail; the heap entry acts as a retry marker in tandem with leaseMap. Once Revoke removes the lease from leaseMap, the next expireExists pass sees a nil lease and unregisters the item, so the heap is guaranteed to be cleaned up eventually.

Now back to how the upper layer consumes le.expiredC. EtcdServer starts a run goroutine at startup; here is just the expiry-related part.

func (s *EtcdServer) run() {
 lg := s.getLogger()
  ......
 var expiredLeaseC <-chan []*lease.Lease
 if s.lessor != nil {
  expiredLeaseC = s.lessor.ExpiredLeasesC()
 }

 for {
  select {
  ......
  case leases := <-expiredLeaseC:
   s.goAttach(func() {
    // Increases throughput of expired leases deletion process through parallelization
    c := make(chan struct{}, maxPendingRevokes)
    for _, lease := range leases {
     select {
     case c <- struct{}{}:
     case <-s.stopping:
      return
     }
     lid := lease.ID
     s.goAttach(func() {
      ctx := s.authStore.WithRoot(s.ctx)
      _, lerr := s.LeaseRevoke(ctx, &pb.LeaseRevokeRequest{ID: int64(lid)})
        ......
      <-c
     })
    }
   })
  ......
  }
 }
}

A layer of rate limiting is added, and LeaseRevoke is then called concurrently for each expired lease. The implementation is blunt, but without the cap the goroutines really would be flying everywhere.
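The pattern itself is a classic Go idiom: a buffered channel used as a counting semaphore to bound in-flight goroutines. A standalone runnable sketch (the constant and the fake work are made up for illustration):

package main

import (
 "fmt"
 "sync"
)

func main() {
 const maxPending = 16 // plays the role of maxPendingRevokes
 sem := make(chan struct{}, maxPending)
 var wg sync.WaitGroup

 for id := 0; id < 100; id++ {
  sem <- struct{}{} // blocks once maxPending workers are in flight
  wg.Add(1)
  go func(id int) {
   defer wg.Done()
   defer func() { <-sem }() // release the slot when done
   fmt.Println("revoking lease", id) // stand-in for LeaseRevoke
  }(id)
 }
 wg.Wait()
}

Back to LeaseRevoke itself: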

func (s *EtcdServer) LeaseRevoke(ctx context.Context, r *pb.LeaseRevokeRequest) (*pb.LeaseRevokeResponse, error) {
 resp, err := s.raftRequestOnce(ctx, pb.InternalRaftRequest{LeaseRevoke: r})
 if err != nil {
  return nil, err
 }
 return resp.(*pb.LeaseRevokeResponse), nil
}

Revoking an expired lease has to go through the raft protocol; skipping over that part, let's look at the subsequent applierv3 handling.

func (a *applierV3backend) LeaseRevoke(lc *pb.LeaseRevokeRequest) (*pb.LeaseRevokeResponse, error) {
 err := a.s.lessor.Revoke(lease.LeaseID(lc.ID))
 return &pb.LeaseRevokeResponse{Header: newHeader(a.s)}, err
}

func (le *lessor) Revoke(id LeaseID) error {
 le.mu.Lock()

 l := le.leaseMap[id]
 if l == nil {
  le.mu.Unlock()
  return ErrLeaseNotFound
 }
 defer close(l.revokec)
 // unlock before doing external work
 le.mu.Unlock()

 if le.rd == nil {
  return nil
 }

 txn := le.rd()

 // sort keys so deletes are in same order among all members,
 // otherwise the backend hashes will be different
 keys := l.Keys()
 sort.StringSlice(keys).Sort()
 for _, key := range keys {
  txn.DeleteRange([]byte(key), nil)
 }

 le.mu.Lock()
 defer le.mu.Unlock()
 delete(le.leaseMap, l.ID)
 // lease deletion needs to be in the same backend transaction with the
 // kv deletion. Or we might end up with not executing the revoke or not
 // deleting the keys if etcdserver fails in between.
 le.b.BatchTx().UnsafeDelete(leaseBucketName, int64ToBytes(int64(l.ID)))

 txn.End()

 leaseRevoked.Inc()
 return nil
}

Grab a txn to open a transaction, iterate over the attached keys and delete them one by one, and finally delete the lease record from the boltdb leaseBucket. Removing it from leaseMap means the next expireExists pass will drop the corresponding item from the min-heap.

Loading at Startup

When etcd restarts, it automatically loads leases and re-attaches all keys; one half lives in lessor.initAndRecover, the other in EtcdServer.

func (le *lessor) initAndRecover() {
 tx := le.b.BatchTx()
 tx.Lock()

 tx.UnsafeCreateBucket(leaseBucketName)
 _, vs := tx.UnsafeRange(leaseBucketName, int64ToBytes(0), int64ToBytes(math.MaxInt64), 0)
 // TODO: copy vs and do decoding outside tx lock if lock contention becomes an issue.
 for i := range vs {
  var lpb leasepb.Lease
  err := lpb.Unmarshal(vs[i])
  if err != nil {
   tx.Unlock()
   panic("failed to unmarshal lease proto item")
  }
  ID := LeaseID(lpb.ID)
  if lpb.TTL < le.minLeaseTTL {
   lpb.TTL = le.minLeaseTTL
  }
  le.leaseMap[ID] = &Lease{
   ID:  ID,
   ttl: lpb.TTL,
   // itemSet will be filled in when recover key-value pairs
   // set expiry to forever, refresh when promoted
   itemSet: make(map[LeaseItem]struct{}),
   expiry:  forever,
   revokec: make(chan struct{}),
  }
 }
 le.leaseExpiredNotifier.Init()
 heap.Init(&le.leaseCheckpointHeap)
 tx.Unlock()

 le.b.ForceCommit()
}

Fairly straightforward: iterate over every record in the boltdb leaseBucketName bucket and load it into leaseMap, with expiry set to forever (as the code shows, leases enter the leaseExpiredNotifier heap later, on promotion). The attached keys start out as an empty itemSet, to be filled in when etcdServer.NewServer initializes.

// NewServer creates a new EtcdServer from the supplied configuration. The
// configuration is considered static for the lifetime of the EtcdServer.
func NewServer(cfg ServerConfig) (srv *EtcdServer, err error) {
  ......
 srv.lessor = lease.NewLessor(
  srv.getLogger(),
  srv.be,
  lease.LessorConfig{
   MinLeaseTTL:                int64(math.Ceil(minTTL.Seconds())),
   CheckpointInterval:         cfg.LeaseCheckpointInterval,
   ExpiredLeasesRetryInterval: srv.Cfg.ReqTimeout(),
  })
 srv.kv = mvcc.New(srv.getLogger(), srv.be, srv.lessor, &srv.consistIndex, mvcc.StoreConfig{CompactionBatchLimit: cfg.CompactionBatchLimit})
  ......
}

Going one level deeper, mvcc.New leads to the store's restore:

func (s *store) restore() error {
......
 // index keys concurrently as they're loaded in from tx
 keysGauge.Set(0)
 rkvc, revc := restoreIntoIndex(s.lg, s.kvindex)
 for {
  keys, vals := tx.UnsafeRange(keyBucketName, min, max, int64(restoreChunkKeys))
  if len(keys) == 0 {
   break
  }
  // rkvc blocks if the total pending keys exceeds the restore
  // chunk size to keep keys from consuming too much memory.
  restoreChunk(s.lg, rkvc, keys, vals, keyToLease)
  if len(keys) < restoreChunkKeys {
   // partial set implies final set
   break
  }
  // next set begins after where this one ended
  newMin := bytesToRev(keys[len(keys)-1][:revBytesLen])
  newMin.sub++
  revToBytes(newMin, min)
 }
 close(rkvc)
 s.currentRev = <-revc

 // keys in the range [compacted revision -N, compaction] might all be deleted due to compaction.
 // the correct revision should be set to compaction revision in the case, not the largest revision
 // we have seen.
 if s.currentRev < s.compactMainRev {
  s.currentRev = s.compactMainRev
 }
 if scheduledCompact <= s.compactMainRev {
  scheduledCompact = 0
 }

 for key, lid := range keyToLease {
  if s.le == nil {
   tx.Unlock()
   panic("no lessor to attach lease")
  }
  err := s.le.Attach(lid, []lease.LeaseItem{{Key: key}})
  if err != nil {
   if s.lg != nil {
    s.lg.Warn(
     "failed to attach a lease",
     zap.String("lease-id", fmt.Sprintf("%016x", lid)),
     zap.Error(err),
    )
   } else {
    plog.Errorf("unexpected Attach error: %v", err)
   }
  }
 }
......
}

With the irrelevant code elided: it scans all user keys from the boltdb keyBucketName bucket, collects those carrying a lease into keyToLease, and then calls Attach on each to rebuild the associations.

CheckPoint

For what a CheckPoint is, see the description in pull9924[8]:

The basic ideas is that for all leases with TTLs greater than 5 minutes, their remaining TTL will be checkpointed every 5 minutes so that if a new leader is elected, the leases are not auto-renewed to their full TTL, but instead only to the remaining TTL from the last checkpoint. A checkpoint is an entry that persisted to the RAFT consensus log that records the remainingTTL as determined by the leader when the checkpoint occurred.

As the code so far shows, the leader owns lease expiry and followers never expire anything on their own. So when leadership changes hands, how does the new leader learn each lease's remainingTTL, in other words, how does it rebuild its min-heap? The answer is the CheckPoint: it periodically replicates the remaining TTL of leases to all followers through raft. For example, a lease granted with a 600s TTL that has roughly 100s left at failover resumes with roughly 100s (plus the election timeout), instead of being reset to the full 600s. lessor.runLoop's goroutine runs checkpointScheduledLeases every 500ms.

// checkpointScheduledLeases finds all scheduled lease checkpoints that are due and
// submits them to the checkpointer to persist them to the consensus log.
func (le *lessor) checkpointScheduledLeases() {
 var cps []*pb.LeaseCheckpoint

 // rate limit
 for i := 0; i < leaseCheckpointRate/2; i++ {
  le.mu.Lock()
  if le.isPrimary() {
   cps = le.findDueScheduledCheckpoints(maxLeaseCheckpointBatchSize)
  }
  le.mu.Unlock()

  if len(cps) != 0 {
   le.cp(context.Background(), &pb.LeaseCheckpointRequest{Checkpoints: cps})
  }
  if len(cps) < maxLeaseCheckpointBatchSize {
   return
  }
 }
}

Every 500ms, the leader calls findDueScheduledCheckpoints to collect the checkpoints that are due, then invokes le.cp, which broadcasts a pb.LeaseCheckpointRequest through EtcdServer.raftRequestOnce over raft. The le.cp callback is wired up when EtcdServer is initialized.
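The wiring plausibly looks like the following sketch from etcdserver's NewServer; in some versions the feature is gated behind a config flag, and while the names follow upstream, treat this as illustrative:

if srv.Cfg.EnableLeaseCheckpoint {
 // Setting the checkpointer enables the lease checkpoint feature:
 // le.cp simply proposes the request through raft.
 srv.lessor.SetCheckpointer(func(ctx context.Context, cp *pb.LeaseCheckpointRequest) {
  srv.raftRequestOnce(ctx, pb.InternalRaftRequest{LeaseCheckpoint: cp})
 })
}

Now findDueScheduledCheckpoints: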

func (le *lessor) findDueScheduledCheckpoints(checkpointLimit int) []*pb.LeaseCheckpoint {
 if le.cp == nil {
  return nil
 }

 now := time.Now()
 cps := []*pb.LeaseCheckpoint{}
 for le.leaseCheckpointHeap.Len() > 0 && len(cps) < checkpointLimit {
  lt := le.leaseCheckpointHeap[0]
  if lt.time /* next checkpoint time */ > now.UnixNano() {
   return cps
  }
  heap.Pop(&le.leaseCheckpointHeap)
  var l *Lease
  var ok bool
  if l, ok = le.leaseMap[lt.id]; !ok {
   continue
  }
  // !now.Before(l.expiry) means now >= l.expiry, i.e. the lease has expired; skip it
  if !now.Before(l.expiry) {
   continue
  }
  remainingTTL := int64(math.Ceil(l.expiry.Sub(now).Seconds()))
  if remainingTTL >= l.ttl {
   continue
  }
  if le.lg != nil {
   le.lg.Debug("Checkpointing lease",
    zap.Int64("leaseID", int64(lt.id)),
    zap.Int64("remainingTTL", remainingTTL),
   )
  }
  cps = append(cps, &pb.LeaseCheckpoint{ID: int64(lt.id), Remaining_TTL: remainingTTL})
 }
 return cps
}
  1. findDueScheduledCheckpoints is simple: peek at the top of the le.leaseCheckpointHeap min-heap; if the next checkpoint time has not arrived yet, return what has been collected so far.
  2. If remainingTTL is not below the configured ttl there is nothing worth syncing to followers; otherwise append a checkpoint to cps and return it.

The raft proposal just mentioned is eventually applied by each node's applierv3 calling LeaseCheckpoint, which boils down to lessor.Checkpoint.

func (a *applierV3backend) LeaseCheckpoint(lc *pb.LeaseCheckpointRequest) (*pb.LeaseCheckpointResponse, error) {
 for _, c := range lc.Checkpoints {
  err := a.s.lessor.Checkpoint(lease.LeaseID(c.ID), c.Remaining_TTL)
  if err != nil {
   return &pb.LeaseCheckpointResponse{Header: newHeader(a.s)}, err
  }
 }
 return &pb.LeaseCheckpointResponse{Header: newHeader(a.s)}, nil
}

Let's look at lessor.Checkpoint directly:

func (le *lessor) Checkpoint(id LeaseID, remainingTTL int64) error {
 le.mu.Lock()
 defer le.mu.Unlock()

 if l, ok := le.leaseMap[id]; ok {
  // when checkpointing, we only update the remainingTTL, Promote is responsible for applying this to lease expiry
  l.remainingTTL = remainingTTL
  if le.isPrimary() {
   // schedule the next checkpoint as needed
   le.scheduleCheckpointIfNeeded(l)
  }
 }
 return nil
}

So on a follower a checkpoint merely updates the lease's remainingTTL; it does not touch the leaseExpiredNotifier min-heap. On the leader, scheduleCheckpointIfNeeded is additionally called to schedule the next checkpoint in checkpointHeap.

Failover

func (r *raft) becomeLeader() {
 // TODO(xiangli) remove the panic when the raft implementation is stable
 if r.state == StateFollower {
  panic("invalid transition [follower -> leader]")
 }
  ......
 emptyEnt := pb.Entry{Data: nil}
 if !r.appendEntry(emptyEnt) {
  // This won't happen because we just called reset() above.
  r.logger.Panic("empty entry was dropped")
 }
  ......
}

After the leadership change, the new leader broadcasts an empty Entry, which is handled when raft applies it.

// applyEntryNormal applies an EntryNormal type raftpb request to the EtcdServer
func (s *EtcdServer) applyEntryNormal(e *raftpb.Entry) {
 shouldApplyV3 := false
 if e.Index > s.consistIndex.ConsistentIndex() {
  // set the consistent index of current executing entry
  s.consistIndex.setConsistentIndex(e.Index)
  shouldApplyV3 = true
 }

 // raft state machine may generate noop entry when leader confirmation.
 // skip it in advance to avoid some potential bug in the future
 if len(e.Data) == 0 {
  select {
  case s.forceVersionC <- struct{}{}:
  default:
  }
  // promote lessor when the local member is leader and finished
  // applying all entries from the last term.
  if s.isLeader() {
   s.lessor.Promote(s.Cfg.electionTimeout())
  }
  return
 }
  ......
}

applyEntryNormal calls lessor.Promote to take over the leases:

func (le *lessor) Promote(extend time.Duration) {
 le.mu.Lock()
 defer le.mu.Unlock()

 le.demotec = make(chan struct{})

 // refresh the expiries of all leases.
 for _, l := range le.leaseMap {
  l.refresh(extend)
  item := &LeaseWithTime{id: l.ID, time: l.expiry.UnixNano()}
  le.leaseExpiredNotifier.RegisterOrUpdate(item)
 }

 if len(le.leaseMap) < leaseRevokeRate {
  // no possibility of lease pile-up
  return
 }
  ......
}

It walks leaseMap, recomputing each lease's expiry via refresh (extended by the election timeout, and based on the checkpointed remainingTTL when one is set) and registering it in the leaseExpiredNotifier heap. From this point on, the revokeExpiredLeases goroutine starts doing real work on this node.
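For completeness, a sketch of what refresh plausibly does, inferred from its call sites (Grant and Renew pass 0, Promote passes the election timeout); the checkpointed remainingTTL is what keeps a failover from resetting leases to their full TTL:

// Sketch of Lease.refresh; illustrative, not verbatim.
func (l *Lease) refresh(extend time.Duration) {
 // Prefer the checkpointed remainingTTL over the full TTL when set.
 ttl := l.ttl
 if l.remainingTTL > 0 {
  ttl = l.remainingTTL
 }
 l.expiry = time.Now().Add(extend + time.Duration(ttl)*time.Second)
}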

Summary

That is it for this post; more on etcd and raft internals to come.

References

[1] Official leases documentation: https://etcd.io/docs/v3.4.0/dev-guide/interacting_v3/#grant-leases

[2] lease implementation: https://github.com/etcd-io/etcd/tree/master/lease

[3] v3rpc: https://github.com/etcd-io/etcd/blob/master/etcdserver/api/v3rpc/lease.go#L43

[4] Lessor Grant: https://github.com/etcd-io/etcd/blob/master/lease/lessor.go#L261

[5] applierv3 put: https://github.com/etcd-io/etcd/blob/master/etcdserver/apply.go#L207

[6] storeTxnWrite.Put: https://github.com/etcd-io/etcd/blob/master/mvcc/kvstore_txn.go#L167

[7] LeaseKeepAlive: https://github.com/etcd-io/etcd/blob/master/etcdserver/api/v3rpc/lease.go#L93

[8] pull9924: https://github.com/etcd-io/etcd/pull/9924
