feat(CodecV7): add CodecV7 to support upgrade 5.2 Euclid phase2 #33 (Merged)
47 commits:
- 286f209 add initial CodecV6 and daBatchV6 (jonastheis)
- 6767845 feat: add codecv5 and codecv6 for Euclid fork (omerfirmak)
- cc9561b implement blob encoding and decoding according to new blob layout (jonastheis)
- 8c2a5cc rename to CodecV7 (jonastheis)
- 9117170 add NewDABatchFromParams (jonastheis)
- 4ef7bfc add DecodeBlob to Codec (jonastheis)
- bf16156 Update da.go (omerfirmak)
- 2817674 Update interfaces.go (omerfirmak)
- 7a60b34 Merge remote-tracking branch 'origin/omerfirmak/euclid' into feat/cod… (jonastheis)
- 64133ef fixes after merge (jonastheis)
- 1dde89a address review comments (jonastheis)
- c9c1a44 add sanity checks for blob payload generation (jonastheis)
- e980b3d fix few small bugs uncovered by unit tests (jonastheis)
- 0e930c6 upgrade to latest l2geth version and add correct getter for CodecV7 i… (jonastheis)
- 5d200f3 fix linter warnings (jonastheis)
- 5292e3c add unit tests (jonastheis)
- 3cfed43 go mod tidy (jonastheis)
- eed341f fix linter warnings (jonastheis)
- be6b422 add function MessageQueueV2ApplyL1MessagesFromBlocks to compute the L… (jonastheis)
- d77916b fix lint and unit test errors
- b71c047 call checkCompressedDataCompatibility only once -> constructBlobPaylo… (jonastheis)
- cbed8b2 address review comments (jonastheis)
- 392b6ff update BlobEnvelopeV7 documentation (jonastheis)
- edaf5d2 add CodecV7 to general util functions (jonastheis)
- 894a93b add InitialL1MessageQueueHash and LastL1MessageQueueHash to encoding.… (jonastheis)
- f3271d9 Merge remote-tracking branch 'origin/main' into feat/codec-v6 (jonastheis)
- 2611ae1 go mod tidy (jonastheis)
- 4d46aad upgrade go-ethereum dependency to latest develop (jonastheis)
- f4b274c implement estimate functions (jonastheis)
- 3c106a2 update TestMain and run go mod tidy (Thegaram)
- 538036b add NewDAChunk to CodecV7 for easier use in relayer (jonastheis)
- 14d07e7 Merge branch 'feat/codec-v6' of github.com:scroll-tech/da-codec into … (jonastheis)
- cfb316b add daChunkV7 type to calculate chunk hash (jonastheis)
- c6ae41e allow batch.chunks but check consistency with batch.blocks (jonastheis)
- d028c53 fix off-by-one error with L1 messages (jonastheis)
- 8fa5e27 Fix: rolling hash implementation (#42) (roynalnaruto)
- 4f13363 Apply suggestions from code review (jonastheis)
- bcad556 rename initialL1MessageQueueHash -> prevL1MessageQueueHash and lastL1… (jonastheis)
- 7522931 address review comments (jonastheis)
- 32f5b49 address review comments (jonastheis)
- 0247443 add challenge digest computation for batch (jonastheis)
- 2043787 remove InitialL1MessageIndex from CodecV7 (jonastheis)
- de09af4 address review comments (jonastheis)
- f9608ed fix tests (jonastheis)
- 01bd9b5 refactoring to minimize duplicate code and increase maintainability (jonastheis)
- fca406c fix nil pointer (jonastheis)
- 5fd8356 address review comments (jonastheis)
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,361 @@ | ||
| package encoding | ||
|
|
||
| import ( | ||
| "crypto/sha256" | ||
| "encoding/hex" | ||
| "encoding/json" | ||
| "errors" | ||
| "fmt" | ||
| "math" | ||
|
|
||
| "github.com/scroll-tech/go-ethereum/common" | ||
| "github.com/scroll-tech/go-ethereum/core/types" | ||
| "github.com/scroll-tech/go-ethereum/crypto/kzg4844" | ||
| "github.com/scroll-tech/go-ethereum/log" | ||
|
|
||
| "github.com/scroll-tech/da-codec/encoding/zstd" | ||
| ) | ||
|
|
||
| type DACodecV7 struct{} | ||
|
|
||
| // Version returns the codec version. | ||
| func (d *DACodecV7) Version() CodecVersion { | ||
| return CodecV7 | ||
| } | ||
|
|
||
| // MaxNumChunksPerBatch returns the maximum number of chunks per batch. | ||
| func (d *DACodecV7) MaxNumChunksPerBatch() int { | ||
| return math.MaxInt | ||
| } | ||
|
|
||
| // NewDABlock creates a new DABlock from the given Block and the total number of L1 messages popped before. | ||
| func (d *DACodecV7) NewDABlock(block *Block, totalL1MessagePoppedBefore uint64) (DABlock, error) { | ||
| return newDABlockV7FromBlockWithValidation(block, &totalL1MessagePoppedBefore) | ||
| } | ||
|
|
||
| // NewDAChunk creates a new DAChunk from the given Chunk and the total number of L1 messages popped before. | ||
| // Note: In DACodecV7 there is no notion of chunks. Blobs contain the entire batch data without any information of Chunks within. | ||
| // However, for compatibility reasons this function is implemented to create a DAChunk from a Chunk. | ||
| // This way we can still uniquely identify a set of blocks and their L1 messages. | ||
| func (d *DACodecV7) NewDAChunk(chunk *Chunk, totalL1MessagePoppedBefore uint64) (DAChunk, error) { | ||
| if chunk == nil { | ||
| return nil, errors.New("chunk is nil") | ||
| } | ||
|
|
||
| if len(chunk.Blocks) == 0 { | ||
| return nil, errors.New("number of blocks is 0") | ||
| } | ||
|
|
||
| if len(chunk.Blocks) > math.MaxUint16 { | ||
| return nil, fmt.Errorf("number of blocks (%d) exceeds maximum allowed (%d)", len(chunk.Blocks), math.MaxUint16) | ||
| } | ||
|
|
||
| blocks := make([]DABlock, 0, len(chunk.Blocks)) | ||
| txs := make([][]*types.TransactionData, 0, len(chunk.Blocks)) | ||
|
|
||
| if err := iterateAndVerifyBlocksAndL1Messages(chunk.PrevL1MessageQueueHash, chunk.PostL1MessageQueueHash, chunk.Blocks, &totalL1MessagePoppedBefore, func(initialBlockNumber uint64) {}, func(block *Block, daBlock *daBlockV7) error { | ||
| blocks = append(blocks, daBlock) | ||
| txs = append(txs, block.Transactions) | ||
|
|
||
| return nil | ||
| }); err != nil { | ||
| return nil, fmt.Errorf("failed to iterate and verify blocks and L1 messages: %w", err) | ||
| } | ||
|
|
||
| daChunk := newDAChunkV7( | ||
| blocks, | ||
| txs, | ||
| ) | ||
|
|
||
| return daChunk, nil | ||
| } | ||
|
|
||
| // NewDABatch creates a DABatch including blob from the provided Batch. | ||
| func (d *DACodecV7) NewDABatch(batch *Batch) (DABatch, error) { | ||
| if len(batch.Blocks) == 0 { | ||
| return nil, errors.New("batch must contain at least one block") | ||
| } | ||
|
|
||
| if err := checkBlocksBatchVSChunksConsistency(batch); err != nil { | ||
| return nil, fmt.Errorf("failed to check blocks batch vs chunks consistency: %w", err) | ||
| } | ||
|
|
||
| blob, blobVersionedHash, blobBytes, err := d.constructBlob(batch) | ||
| if err != nil { | ||
| return nil, fmt.Errorf("failed to construct blob: %w", err) | ||
| } | ||
|
|
||
| daBatch, err := newDABatchV7(CodecV7, batch.Index, blobVersionedHash, batch.ParentBatchHash, blob, blobBytes) | ||
| if err != nil { | ||
| return nil, fmt.Errorf("failed to construct DABatch: %w", err) | ||
| } | ||
|
|
||
| return daBatch, nil | ||
| } | ||
|
|
||
| func (d *DACodecV7) constructBlob(batch *Batch) (*kzg4844.Blob, common.Hash, []byte, error) { | ||
| blobBytes := make([]byte, blobEnvelopeV7OffsetPayload) | ||
|
|
||
| payloadBytes, err := d.constructBlobPayload(batch) | ||
| if err != nil { | ||
| return nil, common.Hash{}, nil, fmt.Errorf("failed to construct blob payload: %w", err) | ||
| } | ||
|
|
||
| compressedPayloadBytes, enableCompression, err := d.checkCompressedDataCompatibility(payloadBytes) | ||
| if err != nil { | ||
| return nil, common.Hash{}, nil, fmt.Errorf("failed to check batch compressed data compatibility: %w", err) | ||
| } | ||
|
|
||
| isCompressedFlag := uint8(0x0) | ||
| if enableCompression { | ||
| isCompressedFlag = 0x1 | ||
| payloadBytes = compressedPayloadBytes | ||
| } | ||
|
|
||
| sizeSlice := encodeSize3Bytes(uint32(len(payloadBytes))) | ||
|
|
||
| blobBytes[blobEnvelopeV7OffsetVersion] = uint8(CodecV7) | ||
| copy(blobBytes[blobEnvelopeV7OffsetByteSize:blobEnvelopeV7OffsetCompressedFlag], sizeSlice) | ||
| blobBytes[blobEnvelopeV7OffsetCompressedFlag] = isCompressedFlag | ||
| blobBytes = append(blobBytes, payloadBytes...) | ||
|
|
||
| if len(blobBytes) > maxEffectiveBlobBytes { | ||
| log.Error("ConstructBlob: Blob payload exceeds maximum size", "size", len(blobBytes), "blobBytes", hex.EncodeToString(blobBytes)) | ||
| return nil, common.Hash{}, nil, fmt.Errorf("blob exceeds maximum size: got %d, allowed %d", len(blobBytes), maxEffectiveBlobBytes) | ||
| } | ||
|
|
||
| // convert raw data to BLSFieldElements | ||
| blob, err := makeBlobCanonical(blobBytes) | ||
| if err != nil { | ||
| return nil, common.Hash{}, nil, fmt.Errorf("failed to convert blobBytes to canonical form: %w", err) | ||
| } | ||
|
|
||
| // compute blob versioned hash | ||
| c, err := kzg4844.BlobToCommitment(blob) | ||
| if err != nil { | ||
| return nil, common.Hash{}, nil, fmt.Errorf("failed to create blob commitment: %w", err) | ||
| } | ||
| blobVersionedHash := kzg4844.CalcBlobHashV1(sha256.New(), &c) | ||
|
|
||
| return blob, blobVersionedHash, blobBytes, nil | ||
| } | ||
jonastheis marked this conversation as resolved.
Show resolved
Hide resolved
|
||
|
|
||
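`constructBlob` wraps the payload in a small envelope: a version byte, a 3-byte big-endian payload size, and a compression flag, followed by the payload itself. The sketch below illustrates that layout; the offset values and helper names here are assumptions for illustration (the real constants are the `blobEnvelopeV7Offset*` values in the package):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Assumed envelope layout (illustrative; the real constants live in the codec package):
// byte 0: codec version, bytes 1-3: payload size (big-endian 24-bit),
// byte 4: compression flag, bytes 5+: payload.
const (
	offVersion        = 0
	offByteSize       = 1
	offCompressedFlag = 4
	offPayload        = 5
)

// encodeSize3Bytes packs a uint32 into 3 big-endian bytes (assumes v < 2^24).
func encodeSize3Bytes(v uint32) []byte {
	var buf [4]byte
	binary.BigEndian.PutUint32(buf[:], v)
	return buf[1:]
}

// decodeSize3Bytes is the inverse of encodeSize3Bytes.
func decodeSize3Bytes(b []byte) uint32 {
	return uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2])
}

func main() {
	payload := []byte("hello")
	env := make([]byte, offPayload)
	env[offVersion] = 7 // codec version
	copy(env[offByteSize:offCompressedFlag], encodeSize3Bytes(uint32(len(payload))))
	env[offCompressedFlag] = 0 // 0 = uncompressed, 1 = compressed
	env = append(env, payload...)

	fmt.Println(env[offVersion], decodeSize3Bytes(env[offByteSize:offCompressedFlag]), env[offCompressedFlag])
	// → 7 5 0
}
```

`DecodeBlob` below reads the same three header fields back before touching the payload.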
```go
func (d *DACodecV7) constructBlobPayload(batch *Batch) ([]byte, error) {
	blobPayload := blobPayloadV7{
		prevL1MessageQueueHash: batch.PrevL1MessageQueueHash,
		postL1MessageQueueHash: batch.PostL1MessageQueueHash,
		blocks:                 batch.Blocks,
	}

	return blobPayload.Encode()
}

// NewDABatchFromBytes decodes the given byte slice into a DABatch.
// Note: This function only populates the batch header; it leaves the blob-related fields empty.
func (d *DACodecV7) NewDABatchFromBytes(data []byte) (DABatch, error) {
	daBatch, err := decodeDABatchV7(data)
	if err != nil {
		return nil, fmt.Errorf("failed to decode DA batch: %w", err)
	}

	if daBatch.version != CodecV7 {
		return nil, fmt.Errorf("codec version mismatch: expected %d but found %d", CodecV7, daBatch.version)
	}

	return daBatch, nil
}

// NewDABatchFromParams creates a DABatch from the given parameters, without blob data.
func (d *DACodecV7) NewDABatchFromParams(batchIndex uint64, blobVersionedHash, parentBatchHash common.Hash) (DABatch, error) {
	return newDABatchV7(CodecV7, batchIndex, blobVersionedHash, parentBatchHash, nil, nil)
}

// DecodeDAChunksRawTx is not supported for DACodecV7; use DecodeBlob instead.
func (d *DACodecV7) DecodeDAChunksRawTx(_ [][]byte) ([]*DAChunkRawTx, error) {
	return nil, errors.New("DecodeDAChunksRawTx is not implemented for DACodecV7, use DecodeBlob instead")
}

// DecodeBlob decodes the given blob into a DABlobPayload.
func (d *DACodecV7) DecodeBlob(blob *kzg4844.Blob) (DABlobPayload, error) {
	rawBytes := bytesFromBlobCanonical(blob)

	// read the blob envelope header
	version := rawBytes[blobEnvelopeV7OffsetVersion]
	if CodecVersion(version) != CodecV7 {
		return nil, fmt.Errorf("codec version mismatch: expected %d but found %d", CodecV7, version)
	}

	// read the data size
	blobPayloadSize := decodeSize3Bytes(rawBytes[blobEnvelopeV7OffsetByteSize:blobEnvelopeV7OffsetCompressedFlag])
	if blobPayloadSize+blobEnvelopeV7OffsetPayload > uint32(len(rawBytes)) {
		return nil, fmt.Errorf("blob envelope size exceeds the raw data size: %d > %d", blobPayloadSize, len(rawBytes))
	}

	payloadBytes := rawBytes[blobEnvelopeV7OffsetPayload : blobEnvelopeV7OffsetPayload+blobPayloadSize]

	// read the compressed flag and decompress if needed
	compressed := rawBytes[blobEnvelopeV7OffsetCompressedFlag]
	if compressed != 0x0 && compressed != 0x1 {
		return nil, fmt.Errorf("invalid compressed flag: %d", compressed)
	}
	if compressed == 0x1 {
		var err error
		if payloadBytes, err = decompressV7Bytes(payloadBytes); err != nil {
			return nil, fmt.Errorf("failed to decompress blob payload: %w", err)
		}
	}

	// decode the payload
	payload, err := decodeBlobPayloadV7(payloadBytes)
	if err != nil {
		return nil, fmt.Errorf("failed to decode blob payload: %w", err)
	}

	return payload, nil
}
```
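`DecodeBlob` starts from `bytesFromBlobCanonical`, the inverse of the `makeBlobCanonical` call in `constructBlob`. The sketch below shows the usual scheme for packing arbitrary bytes into an EIP-4844 blob: 31 data bytes per 32-byte field element, with the leading byte of each element left zero so the element stays below the BLS12-381 scalar modulus. The helper names and the exact layout are assumptions based on that convention, not the package's actual implementation:

```go
package main

import "fmt"

const (
	blobElems         = 4096            // field elements per EIP-4844 blob
	blobSize          = blobElems * 32  // 131072 bytes
	maxEffectiveBytes = blobElems * 31  // usable bytes when one byte per element is reserved
)

// makeCanonical packs data into a blob, 31 bytes per 32-byte field element,
// leaving the first byte of each element zero. Illustrative stand-in for
// what makeBlobCanonical is assumed to do.
func makeCanonical(data []byte) (*[blobSize]byte, error) {
	if len(data) > maxEffectiveBytes {
		return nil, fmt.Errorf("data too large: %d > %d", len(data), maxEffectiveBytes)
	}
	var blob [blobSize]byte
	for i := 0; i < len(data); i += 31 {
		end := i + 31
		if end > len(data) {
			end = len(data)
		}
		elem := i / 31
		copy(blob[elem*32+1:], data[i:end])
	}
	return &blob, nil
}

// fromCanonical strips the reserved leading byte of each element back out.
func fromCanonical(blob *[blobSize]byte) []byte {
	out := make([]byte, 0, maxEffectiveBytes)
	for i := 0; i < blobElems; i++ {
		out = append(out, blob[i*32+1:(i+1)*32]...)
	}
	return out
}

func main() {
	blob, _ := makeCanonical([]byte("codecv7"))
	round := fromCanonical(blob)
	fmt.Printf("%q\n", round[:7]) // → "codecv7"
}
```

This is also why the envelope carries an explicit 3-byte size: the decoded byte stream is always the full effective blob length, so the reader needs the size field to know where the payload ends.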
```go
// DecodeTxsFromBlob is a no-op for DACodecV7; transactions are recovered via DecodeBlob instead.
func (d *DACodecV7) DecodeTxsFromBlob(blob *kzg4844.Blob, chunks []*DAChunkRawTx) error {
	return nil
}

// checkCompressedDataCompatibility compresses the given blob payload and checks the
// compressed data compatibility. It returns the compressed bytes and a flag indicating
// whether compression should be enabled.
func (d *DACodecV7) checkCompressedDataCompatibility(payloadBytes []byte) ([]byte, bool, error) {
	compressedPayloadBytes, err := zstd.CompressScrollBatchBytes(payloadBytes)
	if err != nil {
		return nil, false, fmt.Errorf("failed to compress blob payload: %w", err)
	}

	if err = checkCompressedDataCompatibility(compressedPayloadBytes); err != nil {
		log.Warn("Compressed data compatibility check failed", "err", err, "payloadBytes", hex.EncodeToString(payloadBytes), "compressedPayloadBytes", hex.EncodeToString(compressedPayloadBytes))
		return nil, false, nil
	}

	// if the compressed data is bigger than or equal to the original data, there is no need to compress
	if len(compressedPayloadBytes) >= len(payloadBytes) {
		log.Warn("Compressed data is bigger than or equal to the original data", "payloadBytes", hex.EncodeToString(payloadBytes), "compressedPayloadBytes", hex.EncodeToString(compressedPayloadBytes))
		return nil, false, nil
	}

	return compressedPayloadBytes, true, nil
}

// CheckChunkCompressedDataCompatibility checks the compressed data compatibility for a batch built from a single chunk.
// Note: For DACodecV7 this is a no-op since there is no notion of DAChunk in this version. Blobs
// contain the entire batch data, and it is up to the prover to decide the chunk sizes.
func (d *DACodecV7) CheckChunkCompressedDataCompatibility(_ *Chunk) (bool, error) {
	return true, nil
}

// CheckBatchCompressedDataCompatibility checks the compressed data compatibility for a batch.
func (d *DACodecV7) CheckBatchCompressedDataCompatibility(b *Batch) (bool, error) {
	if len(b.Blocks) == 0 {
		return false, errors.New("batch must contain at least one block")
	}

	if err := checkBlocksBatchVSChunksConsistency(b); err != nil {
		return false, fmt.Errorf("failed to check blocks batch vs chunks consistency: %w", err)
	}

	payloadBytes, err := d.constructBlobPayload(b)
	if err != nil {
		return false, fmt.Errorf("failed to construct blob payload: %w", err)
	}

	_, compatible, err := d.checkCompressedDataCompatibility(payloadBytes)
	if err != nil {
		return false, fmt.Errorf("failed to check batch compressed data compatibility: %w", err)
	}

	return compatible, nil
}

func (d *DACodecV7) estimateL1CommitBatchSizeAndBlobSize(batch *Batch) (uint64, uint64, error) {
	blobBytes := make([]byte, blobEnvelopeV7OffsetPayload)

	payloadBytes, err := d.constructBlobPayload(batch)
	if err != nil {
		return 0, 0, fmt.Errorf("failed to construct blob payload: %w", err)
	}

	compressedPayloadBytes, enableCompression, err := d.checkCompressedDataCompatibility(payloadBytes)
	if err != nil {
		return 0, 0, fmt.Errorf("failed to check batch compressed data compatibility: %w", err)
	}

	if enableCompression {
		blobBytes = append(blobBytes, compressedPayloadBytes...)
	} else {
		blobBytes = append(blobBytes, payloadBytes...)
	}

	return blobEnvelopeV7OffsetPayload + uint64(len(payloadBytes)), calculatePaddedBlobSize(uint64(len(blobBytes))), nil
}
```
```go
// EstimateChunkL1CommitBatchSizeAndBlobSize estimates the L1 commit batch size and blob size for a single chunk.
func (d *DACodecV7) EstimateChunkL1CommitBatchSizeAndBlobSize(chunk *Chunk) (uint64, uint64, error) {
	return d.estimateL1CommitBatchSizeAndBlobSize(&Batch{
		Blocks:                 chunk.Blocks,
		PrevL1MessageQueueHash: chunk.PrevL1MessageQueueHash,
		PostL1MessageQueueHash: chunk.PostL1MessageQueueHash,
	})
}

// EstimateBatchL1CommitBatchSizeAndBlobSize estimates the L1 commit batch size and blob size for a batch.
func (d *DACodecV7) EstimateBatchL1CommitBatchSizeAndBlobSize(batch *Batch) (uint64, uint64, error) {
	return d.estimateL1CommitBatchSizeAndBlobSize(batch)
}

// EstimateBlockL1CommitCalldataSize approximates the calldata size of an L1 commit for this block.
// Note: For CodecV7 the calldata size is constant, regardless of how many blocks or batches are submitted.
func (d *DACodecV7) EstimateBlockL1CommitCalldataSize(block *Block) (uint64, error) {
	return 0, nil
}

// EstimateChunkL1CommitCalldataSize approximates the calldata size needed to commit a chunk to L1.
// Note: For CodecV7 the calldata size is constant, regardless of how many blocks or batches are submitted.
// There is no notion of chunks in this version.
func (d *DACodecV7) EstimateChunkL1CommitCalldataSize(chunk *Chunk) (uint64, error) {
	return 0, nil
}

// EstimateBatchL1CommitCalldataSize approximates the calldata size of an L1 commit for this batch:
// version + batch header.
// Note: For CodecV7 the calldata size is constant, regardless of how many blocks or batches are submitted.
func (d *DACodecV7) EstimateBatchL1CommitCalldataSize(batch *Batch) (uint64, error) {
	return 1 + daBatchV7EncodedLength, nil
}

// EstimateChunkL1CommitGas approximates the total L1 commit gas for this chunk.
// Note: For CodecV7 the gas cost is constant, regardless of how many blocks or batches are submitted.
// There is no notion of chunks in this version.
func (d *DACodecV7) EstimateChunkL1CommitGas(chunk *Chunk) (uint64, error) {
	return 0, nil
}

// EstimateBatchL1CommitGas approximates the total L1 commit gas for this batch.
func (d *DACodecV7) EstimateBatchL1CommitGas(batch *Batch) (uint64, error) {
	// TODO: adjust this after contracts are implemented
	var totalL1CommitGas uint64

	// Add extra gas costs
	totalL1CommitGas += extraGasCost           // constant to account for ops like _getAdmin, _implementation, _requireNotPaused, etc.
	totalL1CommitGas += 4 * coldSloadGas       // 4 one-time cold sloads for commitBatch
	totalL1CommitGas += sstoreGas              // 1 sstore
	totalL1CommitGas += baseTxGas              // base gas for the tx
	totalL1CommitGas += calldataNonZeroByteGas // version byte in calldata

	return totalL1CommitGas, nil
}

// JSONFromBytes converts the given bytes to a DABatch and marshals it to JSON.
func (d *DACodecV7) JSONFromBytes(data []byte) ([]byte, error) {
	batch, err := d.NewDABatchFromBytes(data)
	if err != nil {
		return nil, fmt.Errorf("failed to decode DABatch from bytes: %w", err)
	}

	jsonBytes, err := json.Marshal(batch)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal DABatch to JSON, version %d, hash %s: %w", batch.Version(), batch.Hash(), err)
	}

	return jsonBytes, nil
}
```
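The `kzg4844.CalcBlobHashV1` call in `constructBlob` produces the EIP-4844 versioned hash that `NewDABatchFromParams` later accepts as `blobVersionedHash`: the SHA-256 of the 48-byte KZG commitment, with the first byte replaced by the version tag `0x01`. A stdlib-only sketch of that scheme (the zero commitment is just a placeholder to exercise the shape):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// kzgToVersionedHash computes an EIP-4844 versioned hash: sha256 of the KZG
// commitment with the leading byte overwritten by the version tag 0x01.
// This mirrors what kzg4844.CalcBlobHashV1 is documented to do.
func kzgToVersionedHash(commitment [48]byte) [32]byte {
	h := sha256.Sum256(commitment[:])
	h[0] = 0x01
	return h
}

func main() {
	var c [48]byte // placeholder commitment
	vh := kzgToVersionedHash(c)
	fmt.Printf("version byte: %d, len: %d\n", vh[0], len(vh))
	// → version byte: 1, len: 32
}
```

Because the versioned hash commits to the blob's KZG commitment rather than its raw bytes, the batch header stays constant-size no matter how much data the blob carries, which is what keeps `EstimateBatchL1CommitCalldataSize` constant.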