go-ethereum Source Code Analysis, Part II: Consensus Algorithms
We start with the consensus engine interface, Engine.
Engine is a consensus engine interface that is independent of any concrete algorithm; every concrete implementation in geth satisfies it. Its methods:
Author(header) (common.Address, error) returns the address of the account that minted (sealed) the block corresponding to the given header.
VerifyHeader(chain ChainReader, header *types.Header, seal bool) error checks whether a header conforms to the consensus rules of the current engine. The seal flag controls whether the seal check (VerifySeal) is performed in the same pass.
VerifyHeaders is the batched, concurrent version of VerifyHeader; it returns a quit channel to abort the operation and a results channel to retrieve the asynchronous verification results.
VerifyUncles(chain ChainReader, block) error verifies that the given block's uncles conform to the consensus rules of the engine.
VerifySeal(chain ChainReader, header) error checks that the cryptographic seal on the header is valid according to the consensus rules, e.g. that the PoW nonce actually satisfies the block's difficulty.
Prepare(chain, header) error initializes the consensus fields of a block header according to the rules of the engine. The changes are executed inline.
Finalize runs any post-transaction state modifications (e.g. block and uncle rewards) and assembles the final block.
Seal(chain, block, results chan<- *types.Block, stop <-chan struct{}) generates a sealing request for the given input block and pushes the result into the given channel. Note that it is asynchronous: the method returns immediately and sends the result later. Depending on the consensus algorithm, more than one result block may be returned.
SealHash(header) common.Hash returns the hash of a block prior to it being sealed.
CalcDifficulty(chain, time uint64, parent *types.Header) *big.Int is the difficulty adjustment algorithm; it returns the difficulty a new block should have.
APIs(chain ChainReader) []rpc.API returns the RPC APIs provided by this consensus engine. The assembled interface is sketched below.
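Assembled, the interface looks roughly as follows. This is a sketch modeled on consensus/consensus.go of the geth version under discussion; exact signatures differ between releases, and ChainReader is trimmed to two representative methods here.

package consensus

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rpc"
)

// ChainReader gives the engine read access to the local chain during
// header/uncle verification (full method set elided here).
type ChainReader interface {
	CurrentHeader() *types.Header
	GetHeader(hash common.Hash, number uint64) *types.Header
}

type Engine interface {
	// Author retrieves the address of the account that minted the block.
	Author(header *types.Header) (common.Address, error)
	// VerifyHeader checks a header against the engine's consensus rules.
	VerifyHeader(chain ChainReader, header *types.Header, seal bool) error
	// VerifyHeaders verifies a batch of headers concurrently.
	VerifyHeaders(chain ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error)
	// VerifyUncles verifies the given block's uncles.
	VerifyUncles(chain ChainReader, block *types.Block) error
	// VerifySeal checks the crypto seal of a header.
	VerifySeal(chain ChainReader, header *types.Header) error
	// Prepare initializes the consensus fields of a header, inline.
	Prepare(chain ChainReader, header *types.Header) error
	// Finalize applies post-transaction state changes (e.g. rewards)
	// and assembles the final block.
	Finalize(chain ChainReader, header *types.Header, state *state.StateDB,
		txs []*types.Transaction, uncles []*types.Header,
		receipts []*types.Receipt) (*types.Block, error)
	// Seal pushes sealing results into the results channel asynchronously.
	Seal(chain ChainReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error
	// SealHash is the hash of the block prior to sealing.
	SealHash(header *types.Header) common.Hash
	// CalcDifficulty computes the difficulty of a new block.
	CalcDifficulty(chain ChainReader, time uint64, parent *types.Header) *big.Int
	// APIs returns the RPC APIs this engine provides.
	APIs(chain ChainReader) []rpc.API
}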
So much for the engine interface; now for ethash, geth's proof-of-work algorithm. The spec pseudocode uses the following parameters:

WORD_BYTES = 4                # bytes in word
DATASET_BYTES_INIT = 2**30    # bytes in dataset at genesis
DATASET_BYTES_GROWTH = 2**23  # dataset growth per epoch
CACHE_BYTES_INIT = 2**24      # bytes in cache at genesis
CACHE_BYTES_GROWTH = 2**17    # cache growth per epoch
CACHE_MULTIPLIER = 1024       # Size of the DAG relative to the cache
EPOCH_LENGTH = 30000          # blocks per epoch
MIX_BYTES = 128               # width of mix
HASH_BYTES = 64               # hash length in bytes
DATASET_PARENTS = 256         # number of parents of each dataset element
CACHE_ROUNDS = 3              # number of rounds in cache production
ACCESSES = 64                 # number of accesses in hashimoto loop
Because the cache and the dataset grow over time, their sizes are not taken from these formulas directly: the cache size is the largest value below CACHE_BYTES_INIT + CACHE_BYTES_GROWTH * (block_number // EPOCH_LENGTH) for which the row count size // HASH_BYTES is prime; the dataset size is derived the same way from DATASET_BYTES_INIT and DATASET_BYTES_GROWTH, with size // MIX_BYTES prime. A sketch follows.
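geth precomputes these sizes for the first few thousand epochs and only computes them on the fly beyond that. A minimal sketch of the on-the-fly computation, modeled on calcCacheSize/calcDatasetSize in consensus/ethash/algorithm.go:

package main

import (
	"fmt"
	"math/big"
)

const (
	cacheInitBytes     = 1 << 24 // CACHE_BYTES_INIT
	cacheGrowthBytes   = 1 << 17 // CACHE_BYTES_GROWTH
	datasetInitBytes   = 1 << 30 // DATASET_BYTES_INIT
	datasetGrowthBytes = 1 << 23 // DATASET_BYTES_GROWTH
	mixBytes           = 128     // MIX_BYTES
	hashBytes          = 64      // HASH_BYTES
)

// calcCacheSize walks down from the epoch's size bound until the number
// of 64-byte cache rows is prime. ProbablyPrime(1) is accurate for
// anything that fits in 64 bits.
func calcCacheSize(epoch int) uint64 {
	size := cacheInitBytes + cacheGrowthBytes*uint64(epoch) - hashBytes
	for !new(big.Int).SetUint64(size / hashBytes).ProbablyPrime(1) {
		size -= 2 * hashBytes
	}
	return size
}

// calcDatasetSize does the same with 128-byte dataset rows.
func calcDatasetSize(epoch int) uint64 {
	size := datasetInitBytes + datasetGrowthBytes*uint64(epoch) - mixBytes
	for !new(big.Int).SetUint64(size / mixBytes).ProbablyPrime(1) {
		size -= 2 * mixBytes
	}
	return size
}

func main() {
	// Epoch 0 (blocks 0..29999): ~16 MB cache, ~1 GB dataset.
	fmt.Println(calcCacheSize(0), calcDatasetSize(0))
}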
Generating the seed for mkcache:
def get_seedhash(block):
    s = '\x00' * 32
    for i in range(block.number // EPOCH_LENGTH):
        s = serialize_hash(sha3_256(s))
    return s
How the cache is generated:
Note that the sha3_256/sha3_512 used by Ethereum are not the finalized NIST SHA-3 but the original Keccak-256/Keccak-512 (the padding differs), as illustrated after the code below.
def mkcache(cache_size, seed):
    n = cache_size // HASH_BYTES

    # Sequentially produce the initial dataset
    o = [sha3_512(seed)]
    for i in range(1, n):
        o.append(sha3_512(o[-1]))

    # Use a low-round version of randmemohash
    for _ in range(CACHE_ROUNDS):
        for i in range(n):
            v = o[i][0] % n
            o[i] = sha3_512(map(xor, o[(i-1+n) % n], o[v]))

    return o
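The Keccak/SHA-3 difference is easy to see in Go. A small demo using golang.org/x/crypto/sha3, whose NewLegacyKeccak256 implements the pre-standard padding that Ethereum (and geth's crypto.Keccak256) uses:

package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

func main() {
	data := []byte("")

	// NIST SHA3-256 (FIPS 202 padding).
	nist := sha3.Sum256(data)

	// Original Keccak-256, which is what Ethereum calls "sha3".
	keccak := sha3.NewLegacyKeccak256()
	keccak.Write(data)

	fmt.Printf("SHA3-256:   %x\n", nist)
	fmt.Printf("Keccak-256: %x\n", keccak.Sum(nil))
	// The two digests differ; geth's crypto.Keccak256 matches the latter.
}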
How the dataset is generated:
FNV_PRIME = 0x01000193

def fnv(v1, v2):
    return ((v1 * FNV_PRIME) ^ v2) % 2**32

def calc_dataset_item(cache, i):
    n = len(cache)
    r = HASH_BYTES // WORD_BYTES
    # initialize the mix
    mix = copy.copy(cache[i % n])
    mix[0] ^= i
    mix = sha3_512(mix)
    # fnv it with a lot of random cache nodes based on i
    for j in range(DATASET_PARENTS):
        cache_index = fnv(i ^ j, mix[j % r])
        mix = map(fnv, mix, cache[cache_index % n])
    return sha3_512(mix)
def calc_dataset(full_size, cache):
    return [calc_dataset_item(cache, i) for i in range(full_size // HASH_BYTES)]
The main body of the algorithm.
Note: the s here is the 64-byte header+nonce seed, not to be confused with the cache seed above.
def hashimoto(header, nonce, full_size, dataset_lookup):
    n = full_size // HASH_BYTES
    w = MIX_BYTES // WORD_BYTES
    mixhashes = MIX_BYTES // HASH_BYTES
    # combine header+nonce into a 64 byte seed
    s = sha3_512(header + nonce[::-1])
    # start the mix with replicated s
    mix = []
    for _ in range(MIX_BYTES // HASH_BYTES):
        mix.extend(s)
    # mix in random dataset nodes
    for i in range(ACCESSES):
        p = fnv(i ^ s[0], mix[i % w]) % (n // mixhashes) * mixhashes
        newdata = []
        for j in range(MIX_BYTES // HASH_BYTES):
            newdata.extend(dataset_lookup(p + j))
        mix = map(fnv, mix, newdata)
    # compress mix
    cmix = []
    for i in range(0, len(mix), 4):
        cmix.append(fnv(fnv(fnv(mix[i], mix[i+1]), mix[i+2]), mix[i+3]))
    return {
        "mix digest": serialize_hash(cmix),
        "result": serialize_hash(sha3_256(s + cmix))
    }

def hashimoto_light(full_size, cache, header, nonce):
    return hashimoto(header, nonce, full_size,
                     lambda x: calc_dataset_item(cache, x))

def hashimoto_full(full_size, dataset, header, nonce):
    return hashimoto(header, nonce, full_size, lambda x: dataset[x])
def mine(full_size, dataset, header, difficulty):
    # zero-pad target to compare with hash on the same digit when reversed
    target = zpad(encode_int(2**256 // difficulty), 64)[::-1]
    from random import randint
    nonce = randint(0, 2**64)
    # compare the "result" field against the target (the spec pseudocode
    # leaves this indexing implicit)
    while hashimoto_full(full_size, dataset, header, nonce)["result"] > target:
        nonce = (nonce + 1) % 2**64
    return nonce
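In geth the same check appears on the verification side: VerifySeal recomputes the hash with hashimotoLight and compares the result against 2^256 / difficulty as big integers. A minimal sketch of just that comparison (meetsDifficulty and powResult are illustrative names for the 32-byte "result" output above, not geth identifiers):

package main

import (
	"fmt"
	"math/big"
)

// two256 = 2^256, the upper bound of the hash space.
var two256 = new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)

// meetsDifficulty reports whether a hashimoto result satisfies the
// difficulty, i.e. result <= 2^256 / difficulty.
func meetsDifficulty(powResult [32]byte, difficulty *big.Int) bool {
	target := new(big.Int).Div(two256, difficulty)
	return new(big.Int).SetBytes(powResult[:]).Cmp(target) <= 0
}

func main() {
	// An all-zero result trivially satisfies any positive difficulty.
	var res [32]byte
	fmt.Println(meetsDifficulty(res, big.NewInt(1000000))) // true
}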
Back in geth, the engine implementing all of this is the Ethash struct (consensus/ethash/ethash.go):

// Ethash is a consensus engine based on proof-of-work implementing the ethash
// algorithm.
type Ethash struct {
	config Config

	caches   *lru // In memory caches to avoid regenerating too often
	datasets *lru // In memory datasets to avoid regenerating too often

	// Mining related fields
	rand     *rand.Rand    // Properly seeded random source for nonces
	threads  int           // Number of threads to mine on if mining
	update   chan struct{} // Notification channel to update mining parameters
	hashrate metrics.Meter // Meter tracking the average hashrate

	// Remote sealer related fields
	workCh       chan *sealTask   // Notification channel to push new work and relative result channel to remote sealer
	fetchWorkCh  chan *sealWork   // Channel used for remote sealer to fetch mining work
	submitWorkCh chan *mineResult // Channel used for remote sealer to submit their mining result
	fetchRateCh  chan chan uint64 // Channel used to gather submitted hash rate for local or remote sealer.
	submitRateCh chan *hashrate   // Channel used for remote sealer to submit their mining hashrate

	// The fields below are hooks for testing
	shared    *Ethash       // Shared PoW verifier to avoid cache regeneration
	fakeFail  uint64        // Block number which fails PoW check even in fake mode
	fakeDelay time.Duration // Time delay to sleep for before returning from verify

	lock      sync.Mutex      // Ensures thread safety for the in-memory caches and mining fields
	closeOnce sync.Once       // Ensures exit channel will not be closed twice.
	exitCh    chan chan error // Notification channel to exiting backend threads
}
Ethash has the following methods in ethash.go:
1. cache
2. dataset
Both first look in memory, then on disk (the DAG directory), and finally generate the structure if neither copy exists (see the sketch after this list).
3. Hashrate
Collects the per-second rate of search invocations over the last minute, from both the local miner and remote peers that submit their hashrate.
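The tiered lookup order is the interesting part of cache/dataset. A toy sketch of the idea; every type and helper here (cacheForEpoch, loadFromDisk, generateCache, memCaches) is a hypothetical stand-in, while in geth the logic lives in the generate/memory-mapping code of consensus/ethash/ethash.go:

package main

import (
	"errors"
	"fmt"
)

type cache struct{ epoch uint64 }

var memCaches = map[uint64]*cache{} // stands in for the in-memory LRU

func loadFromDisk(epoch uint64) (*cache, error) {
	return nil, errors.New("not on disk") // pretend the DAG dir is empty
}

func generateCache(epoch uint64) *cache {
	fmt.Println("generating cache for epoch", epoch)
	return &cache{epoch: epoch}
}

// cacheForEpoch tries the cheapest source first: memory, then disk, then
// full regeneration.
func cacheForEpoch(epoch uint64) *cache {
	if c, ok := memCaches[epoch]; ok { // 1. in-memory
		return c
	}
	if c, err := loadFromDisk(epoch); err == nil { // 2. on-disk DAG dir
		memCaches[epoch] = c
		return c
	}
	c := generateCache(epoch) // 3. regenerate (geth also persists it)
	memCaches[epoch] = c
	return c
}

func main() {
	cacheForEpoch(7) // generates
	cacheForEpoch(7) // hits memory, no second generation
}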
SUGAR:
Go non-blocking channel operations
With select + default, a send or receive runs only if the channel is ready right now; otherwise the default branch fires.
select {
case msg := <-messages:
	fmt.Println("received message", msg)
case sig := <-signals:
	fmt.Println("received signal", sig)
default:
	fmt.Println("no activity")
}
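The same shape works for non-blocking sends, which is how a sealer loop can hand out work without stalling when no consumer is ready (messages and msg are illustrative names):

messages := make(chan string, 1)
msg := "new work"
select {
case messages <- msg:
	fmt.Println("sent message", msg)
default:
	fmt.Println("no message sent")
}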