author    John Jannotti <john.jannotti@algorand.com>    2022-01-17 22:55:41 -0500
committer GitHub <noreply@github.com>    2022-01-17 22:55:41 -0500
commit    4b1dfed080122c317df7e12e0d83c55871da7ff4 (patch)
tree      3426f347e1702eb25cad77505436bc1b8b3ff934
parent    4006ce219ebcebbb65c621b9f04362b45b256065 (diff)
Creating clean branch / PR for audit (#3432) (feature/c2c)
* The new inner appl fields
* Unit tests for field setting on appls
* Construct EvalDelta in AVM rather than by inspecting ledger
* Obey the linter!
* More LedgerForEvaluation accommodation
* Test inner evaldeltas
* Checks on calling old AVM apps, or re-entrancy
* Allow the opcode budget to be added to by executing inner apps.
* TxID and GroupID for inner transactions
* gitxn/gitxna
* Lint, spec generate
* txn simplifications
* Encode "arrayness" in the txn field spec
* Pavel's CR comments
* Update tests to distinguish assembly / eval errors
* Test itxn_field assembly separately from eval
* Factor out the array index parsing of all the txn assembly
* Consistent errors and parsing for many opcodes
* Clean up immediate parsing, prep txn effects testing
* EvalParams is now a single object used to evaluate each txn in turn.
* Simplifications for the Dawg (the Review Dog)
* Use a copy for the EvalParams.TxnGroup
* Set the logicsig on txns in the GroupContext, so check() can see it
* Update test for explicit empty check
* Three new globals to help contract-to-contract usability (#3237)
* Three new globals to help contract-to-contract usability
* detritus
* Check error
* doc comments
* Gloadss (#3248)
* Three new globals to help contract-to-contract usability
* detritus
* Check error
* doc comments
* opcode, docs, and tests
* specs update
* Feature/contract to contract (#3285)
* Update the Version, BuildNumber, genesistimestamp.data
* Three new globals to help contract-to-contract usability
* detritus
* Check error
* doc comments
* Support transaction arguments for `goal app method` (#3233)
* Implement transactions as arguments
* Fix indexing and dryrun issue
* Add docstring
* Satisfy review dog
* Fix pointer issue
* Fix group command
* Rename e2e test
* Fix filename variable
* Add e2e test
* Use tab
* CI: use libboost-math-dev instead of libboost-all-dev (#3223)
## Summary
Small change: libboost-math-dev requires just 4 packages to install, while libboost-all-dev requires more than 100. Only Debian/Ubuntu distributions provide fine-grained boost packages like this, but it should still shave a little time off the CI builds. (Our only boost include is boost/math/distributions/binomial.hpp.)
## Test Plan
Builds should pass as before. Now that we are no longer using Travis for Linux builds, the side effect of libboost-all-dev installing make and other missing build tools on Travis, encountered in #2717, is no longer a concern.
* testing: fixes to rest-participation-key e2e test (#3238)
## Summary
- Test to make sure RES has the right input before counting line numbers for result size.
- Reset RES to empty so that the same output is not recycled in case of an error.
- exit 1 in case of an error
- Reduce LAST_ROUND from 1200000 to 120
- "Get List of Keys" before getting NUM_IDS_3, otherwise it will recycle the old RES value.
* testing: interactive mode for e2e testing (#3227)
## Summary
Some e2e tests require a python environment for testing. Unfortunately, setting up that environment to adequately match the testing environment may not be trivial. This change introduces an interactive mode to the e2e.sh script which stops at the point of running the tests and allows the user to run the tests from the same testing environment.
## Test Plan
No tests needed. Tested the script locally.
* Make dev-mode tests less flaky. (#3252)
## Summary
Fix a couple of flaws in the new go-e2e tests built on top of DevMode:
* Shut down the fixture when finished.
* Don't run in parallel.
* Longer delays / better algorithms to wait for data flushing to complete.
* Check for "out of order" keys.
## Test Plan
N/A, this is a test.
* adding libtool to ubuntu deps (#3251)
## Summary
The sandbox is not building with the dev config on the master branch (https://github.com/algorand/sandbox/issues/85); it complains about libtool not being installed. Guessing from https://github.com/algorand/go-algorand/pull/3223, this adds libtool to UBUNTU_DEPS in the install scripts.
## Test Plan
Set the config in sandbox to my branch, ran `sandbox up dev`, and it built.
* Fix error shadowing in Eval (#3258)
## Summary
The error from account preloading was shadowed by returning the wrong err variable. This caused subsequent problems in account updates and masked the original failure.
## Test Plan
Use existing tests.
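(Illustration, not code from this diff: the shadowing pattern fixed in #3258, shown as a minimal Go sketch; the function and variable names here are made up, not the real eval code.)
```
package main

import (
	"errors"
	"fmt"
)

var errPreload = errors.New("account preload failed")

// loadAccounts stands in for the account-preloading step; it always fails here.
func loadAccounts() error { return errPreload }

// shadowed shows the bug pattern: the inner err is a new variable, so the
// outer err that is ultimately returned stays nil and the failure is masked.
func shadowed() (err error) {
	if err := loadAccounts(); err != nil {
		fmt.Println("preload failed:", err) // logged, but...
	}
	return err // ...this is the outer err, still nil
}

// fixed returns the error that was actually observed.
func fixed() error {
	if err := loadAccounts(); err != nil {
		return err
	}
	return nil
}

func main() {
	fmt.Println("shadowed:", shadowed()) // <nil>, failure masked
	fmt.Println("fixed:", fixed())       // account preload failed
}
```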
* Disable flaky test. (#3256)
## Summary
This test doesn't work properly; disable it until #3255 addresses any underlying problems.
* Update the Version, BuildNumber, genesistimestamp.data
* Fix a data race in app tests (#3269)
## Summary
A test helper function `commitRound` accessed `l.trackers.lastFlushTime` without taking a lock. Fixed.
## Test Plan
```
go test ./ledger -run TestAppEmpty -race -count=50
ok      github.com/algorand/go-algorand/ledger  4.078s
```
* Fix e2e.sh mixed indent characters. (#3266)
## Summary
Fix e2e.sh mixed indent characters.
* Fix ParticipationKeyResponse type. (#3264)
## Summary
Fix a small type discrepancy in the OpenAPI spec ahead of some other work that's about to happen.
* disable parallelism for e2e-go tests (#3242)
## Summary
This sets `-p 1` for the e2e-go tests, intended to make them more deterministic when running on a VM with relatively constrained resources. Since each e2e-go test might spin up a few nodes, it seems like it would help to avoid resource contention.
## Test Plan
Tests should run as before. The desired effect can be verified by looking at the test output, where the value of PARALLEL_FLAG is printed out before the tests are run.
* Updating Readme.md with circleci status badges (#3245)
* Fix formatting for CircleCI badges (#3272)
* Add Custom Scenario for Performance Testing (#3278)
Add a README on how to run the custom scenario, and modify create_and_deploy_recipe.sh to accept a network template that will generate a new recipe.
* Impose limits on the entire "tree" of inner calls.
This also increases the realism of testing of multiple app calls in a group by creating the EvalParams with the real constructor, thus getting the pooling stuff tested here without playing games manipulating the ep after construction.
* ParticipationRegistry - StateProof loading methods (#3261)
## Summary
Add ParticipationRegistry methods for setting and retrieving state proof keys. Since they aren't in master yet, there is a `type StateProofKey []byte` stub which will need to be updated later.
## Test Plan
New unit tests.
* Op base64 decode (#3220)
b64 opcode, tests, and specs
* Bump Version, Remove buildnumber.dat and genesistimestamp.dat files.
* Change golang version to 1.16.11 in go-algorand (#2825)
Upgrading to 1.16 to help alleviate issues with working on different go versions, and to update to a supported, more secure version. Release notes for Go 1.15 and 1.16: https://tip.golang.org/doc/go1.16 https://tip.golang.org/doc/go1.15
* Compatibility mode for partkeyinfo. (#3291)
## Summary
Compatibility for `partkeyinfo` was also needed by some users. In addition to the different format, the old command also allows printing key information when the node is not running. Workarounds: 1) use an older `goal` binary; 2) use `algokey part info --keyfile <file>`.
## Test Plan
Tested manually:
```
~$ goal account partkeyinfo -d /tmp/private_network/Node/
Dumping participation key info from /tmp/private_network/Node/...
Participation ID:          CPLHRU3WEY3PE7XTPPSIE7BGJYWAIFPS7DL3HZNC4OKQRQ5YAYUA
Parent address:            DGS6VNX2BRMKGKVAS2LTREMYG33TOCYPFLPCQ3DUTJULQU6P6S7KJCDNTU
Last vote round:           1
Last block proposal round: 2
Effective first round:     1
Effective last round:      3000000
First round:               0
Last round:                3000000
Key dilution:              10000
Selection key:             5QRrTgzSUTqqym43QVsBus1/AOwGR5zE+I7FGwA14vQ=
Voting key:                PK0NMyZ4BKSjPQ9JuT7dQBLdTpjLQv2txuDYDKhkuqs=

~$ goal account partkeyinfo -d /tmp/private_network/Node/ -c
Dumping participation key info from /tmp/private_network/Node/...
------------------------------------------------------------------
File: Wallet2.0.3000000.partkey
{
  "acct": "DGS6VNX2BRMKGKVAS2LTREMYG33TOCYPFLPCQ3DUTJULQU6P6S7KJCDNTU",
  "last": 3000000,
  "sel": "5QRrTgzSUTqqym43QVsBus1/AOwGR5zE+I7FGwA14vQ=",
  "vote": "PK0NMyZ4BKSjPQ9JuT7dQBLdTpjLQv2txuDYDKhkuqs=",
  "voteKD": 10000
}
```
* TestEcdsa: fix flaky "tampering" of public key (#3282)
## Summary
This test (TestEcdsa) tests the ecdsa_pk_decompress opcode and intentionally "tampers" with the public key by setting the first byte to zero. Occasionally this test fails, likely because the first byte was already zero. (The test failures are for the cases where failure is expected, `pass=false`.)
## Test Plan
The existing test should pass; the occasional flakiness should go away.
* Move appID tracking into EvalContext, out of LedgerForLogic
This change increases the separation between AVM execution and the ledger being used to look up resources. Previously, the ledger kept track of the appID being executed, to offer a narrower interface to those resources. But now, with app-to-app calls, the appID being executed must change, and the AVM needs to maintain the current appID.
* Stupid linter
* Support reference types in `goal app method` (#3275)
* Fix method signature parse bug
* Support reference types
* Review dog fixes
* Fix comments
* Add a hash prefix for ARCs-related hashes (#3298)
## Summary
This is to allow ARCs (github.com/algorandfoundation/ARCs) to have their own hash prefix without risk of collision.
## Test Plan
It is purely informational. There is no real code change.
* catchup: suspend the catchup session once the agreement service kicks in (#3299)
The catchup service stops when it is complete, i.e. it has reached the round that is currently being agreed on. The catchup service knows it is complete, and should stop, when it finds that a block is already in the ledger before it adds it. In other words, apart from the catchup, only the agreement service adds blocks to the ledger, and when the agreement service adds a block to the ledger before the catchup does, the agreement is ahead and the catchup is complete. When `fetchAndWrite` detects that the block is already in the ledger, it returns `false`, which stops the catchup syncing.
In previous releases, `fetchAndWrite` only checked whether the block was already in the ledger after attempting to fetch it. Since it fails to fetch a block not yet agreed on, the fetch fails after multiple attempts, and `fetchAndWrite` returns `false`, ending the catchup. A recent change made this process more efficient by first checking if the block is in the ledger before/during the fetch. However, once the block was found in the ledger, `fetchAndWrite` returned true instead of false (consistent with the pre-existing logic, which was also wrong). This caused the catchup to continue syncing after the catchup was complete. This change fixes the return value from true to false.
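(Illustration, not code from this diff: a minimal Go sketch of the corrected `fetchAndWrite` decision described above. The `ledger` and `fetcher` interfaces here are simplified stand-ins for the real catchup service types.)
```
package main

import "fmt"

type round uint64

type ledger interface {
	HasBlock(r round) bool
}

type fetcher interface {
	FetchBlock(r round) ([]byte, error)
}

// fetchAndWrite returns true if catchup should keep syncing the next round,
// and false once syncing should stop.
func fetchAndWrite(l ledger, f fetcher, r round) bool {
	// Check before fetching: if the block is already in the ledger, the
	// agreement service wrote it, so catchup is complete. Returning true
	// here (the old behavior) kept the session syncing after it was done.
	if l.HasBlock(r) {
		return false
	}
	blk, err := f.FetchBlock(r)
	if err != nil {
		// The real service retries; a block still being agreed on cannot be
		// fetched, so repeated failures also end the catchup session.
		return false
	}
	_ = blk // elided: validate the block and write it to the ledger
	return true
}

type memLedger map[round]bool

func (m memLedger) HasBlock(r round) bool { return m[r] }

type noFetch struct{}

func (noFetch) FetchBlock(r round) ([]byte, error) {
	return nil, fmt.Errorf("round %d is not agreed on yet", r)
}

func main() {
	l := memLedger{1: true, 2: true}
	fmt.Println(fetchAndWrite(l, noFetch{}, 2)) // false: agreement is ahead, stop
	fmt.Println(fetchAndWrite(l, noFetch{}, 3)) // false: cannot fetch an unagreed block
}
```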
* Bump buildnumber.dat
* testing: disable flaky test (#3268)
Disable a flaky test, to be re-enabled later with #3267.
* enumerate conditions that might cause this fetchAndWrite to return false (#3301)
## Summary
The fetchAndWrite function contains some complex logic to ultimately determine whether we should continue trying to catch up. The conditions that might cause it to return false should be more explicitly enumerated.
## Test Plan
Just comments.
* Fix unit tests error messages
* make sure the block service is not attempting to access the ledger after being stopped. (#3303)
## Summary
The block service was attempting to serve blocks via the http handler even after it had been stopped. This led to undesired downstream failures in the ledger, which was shut down as well.
## Test Plan
Unit test added.
* Avoid creating an algod process for the sole purpose of retrieving the genesis-id. (#3308)
## Summary
Existing code was calling `algod -G -d <data dir>` in order to obtain the genesis version string. The genesis version string can be easily retrieved by loading the genesis file.
## Test Plan
Use existing e2e tests.
* documentation: fix algorand specs link (#3309)
## Summary
This PR fixes a link in a README.
## Testing
I clicked on the new link.
* testing: reword partitiontest lint message. (#3297)
## Summary
The wording on this was tripping me up; maybe I was having an off day. I think it would be slightly easier if the message told you exactly what you need to do (and did not use the angle brackets).
* testing: fix random data race in TestAppAccountDataStorage (#3315)
Fix a random data race in a unit test.
* Allow access to resources created in the same transaction group
The method will be reworked, but the tests are correct and we want to get them visible to the team.
Co-authored-by: DevOps Service <devops-service@algorand.com>
Co-authored-by: Jason Paulos <jasonpaulos@users.noreply.github.com>
Co-authored-by: chris erway <51567+cce@users.noreply.github.com>
Co-authored-by: Shant Karakashian <55754073+algonautshant@users.noreply.github.com>
Co-authored-by: John Lee <64482439+algojohnlee@users.noreply.github.com>
Co-authored-by: Will Winder <wwinder.unh@gmail.com>
Co-authored-by: Ben Guidarelli <ben.guidarelli@gmail.com>
Co-authored-by: Pavel Zbitskiy <65323360+algorandskiy@users.noreply.github.com>
Co-authored-by: Jack <87339414+algojack@users.noreply.github.com>
Co-authored-by: John Lee <john.lee@algorand.com>
Co-authored-by: algobarb <78746954+algobarb@users.noreply.github.com>
Co-authored-by: Zeph Grunschlag <tzaffi@users.noreply.github.com>
Co-authored-by: Fabrice Benhamouda <fabrice.benhamouda@normalesup.org>
Co-authored-by: Tsachi Herman <tsachi.herman@algorand.com>
Co-authored-by: Tolik Zinovyev <tolik@algorand.com>
Co-authored-by: egieseke <eric_gieseke@yahoo.com>
* add access to resources created in the current group (#3340)
* Feature/contract to contract (#3357)
* ledger: perform the catchpoint writing outside the trackers lock. (#3311)
## Summary
This PR moves the catchpoint file writing to be performed outside of the trackers lock. This resolves the issue where a long catchpoint file write blocks the agreement service from validating and propagating votes.
## Test Plan
* [x] Test manually & use existing tests.
* [x] Implement a unit test.
* [x] Deploy a local network where the catchpoint writing takes a long time and verify it doesn't get blocked during catchpoint writing.
* Separate tx and key validity for `goal account renewpartkey` (#3286)
Always use currentRound+proto.MaxTxnLife as the last valid round for the transaction when renewing, instead of using the partkey validity period. This fixes #3283.
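(Illustration, not code from this diff: the validity split in #3286 is a one-line computation; the types and values below are simplified stand-ins for the real config and node status fields.)
```
package main

import "fmt"

// consensusParams stands in for the consensus configuration; MaxTxnLife is the
// maximum number of rounds a transaction may remain valid.
type consensusParams struct {
	MaxTxnLife uint64
}

func main() {
	proto := consensusParams{MaxTxnLife: 1000} // example value
	currentRound := uint64(5_000_000)

	// Transaction validity: always currentRound + MaxTxnLife when renewing,
	// regardless of how long the participation key itself is valid.
	txLastValid := currentRound + proto.MaxTxnLife

	// Key validity is chosen separately (e.g. the renewed key's last round).
	keyLastValid := currentRound + 3_000_000

	fmt.Println("tx valid through round: ", txLastValid)
	fmt.Println("key valid through round:", keyLastValid)
}
```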
* Add qkniep to THANKS.md (#3320)
## Summary
Add qkniep to THANKS.md.
* Followup to opcode base64_decode (#3288)
* alphabet begone in favor of encoding
* unit test various padding and whitespace scenarios
* padding permutations also fail
* "Slicing" --> "Manipulation"
* fix the codegen fail?
* Documenting padding, whitespace, other character behavior
* Add help and fish mode to e2e interactive mode. (#3313)
## Summary
Minor improvements to e2e.sh interactive mode:
* add to -h output
* do not run the start/stop test in interactive mode
* support fish shell
## Test Plan
Manual testing:
```
~$ ./e2e.sh -i
... lots of output removed ...
********** READY **********
The test environment is now set. You can now run tests in another terminal.
Configure the environment:
set -g VIRTUAL_ENV "/home/will/go/src/github.com/algorand/go-algorand/tmp/out/e2e/130013-1639576513257/ve"
set -g PATH "$VIRTUAL_ENV/bin:$PATH"
python3 "/home/will/go/src/github.com/algorand/go-algorand/test/scripts"/e2e_client_runner.py "/home/will/go/src/github.com/algorand/go-algorand/test/scripts"/e2e_subs/SCRIPT_FILE_NAME
Press enter to shut down the test environment...
```
* Minimum Account Balance in Algod (#3287)
* Access to apps created in group
Also adds some tests that are currently skipped:
- access to addresses of newly created apps
- use of gaid in inner transactions
Both require some work to implement the thing being tested.
* Remove tracked created mechanism in favor of examining applydata.
* Add convertAddress tool. (#3304)
## Summary
New tool: convertAddress. I share this tool with someone every few months; putting it in the repo along with some documentation should make it easier to share and encourage people to share it amongst themselves if it's useful. Merge `debug` into `tools` to make it easier to organize these miscellaneous tools.
* tealdbg: increase intermediate reading/writing buffers (#3335)
## Summary
Some large teal source files cause the tealdbg/cdt session to choke. Upping the buffer size to allow for larger source files. Closes #3100.
## Test Plan
Run tealdbg with a large teal source file and ensure the source file makes it to cdt without choking.
* Adding method pseudo op to readme (#3338)
* Allow v6 AVM code to use in-group created asas, apps (& their accts)
One exception: apps cannot mutate (put or del) keys from the app accounts, because EvalDelta cannot encode such changes.
* lint docs
* typo
* The review dog needs obedience training.
* add config.DeadlockDetectionThreshold (#3339)
## Summary
This allows the deadlock detection threshold to be set by configuration.
## Test Plan
Existing tests should pass.
* Use one EvalParams for logic evals, another for apps in dry run
We used to use one ep per transaction, shared between sig and app. But the new model of ep usage is to keep using one while evaluating an entire group. The app ep is now built by logic.NewAppEvalParams, which, hopefully, will prevent some bugs when we change something in the EvalParams and don't reflect it in what was a "raw" EvalParams construction in the debugger and dry run.
* Use logic.NewAppEvalParams to decrease copying and bugs in debugger
* Simplify use of NewEvalParams. No more nil return when no apps.
This way, NewEvalParams can be used for all creations of EvalParams, whether they are intended for logicsig or app use, greatly simplifying the way we make them for use by dry run or the debugger (where they serve double duty).
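(Illustration, not code from this diff: a rough Go sketch of the "one EvalParams per group" shape described above. `evalParams`, `newEvalParams`, and the per-app-call budget constant are stand-ins, not the real logic package API.)
```
package main

import "fmt"

// txn is a stand-in for a transaction in a group; only what the sketch needs.
type txn struct {
	IsAppCall bool
}

// evalParams is shared across the whole group, so per-group state
// (pooled opcode budget, accumulated inner-txn effects, and so on) lives here.
type evalParams struct {
	group        []txn
	pooledBudget int
}

const budgetPerAppCall = 700 // illustrative constant, not looked up from consensus params

// newEvalParams builds one params object for the entire group, mirroring the
// "always construct it, even with no app calls" simplification.
func newEvalParams(group []txn) *evalParams {
	budget := 0
	for _, t := range group {
		if t.IsAppCall {
			budget += budgetPerAppCall
		}
	}
	return &evalParams{group: group, pooledBudget: budget}
}

func main() {
	group := []txn{{IsAppCall: true}, {IsAppCall: false}, {IsAppCall: true}}
	ep := newEvalParams(group)
	// Each transaction in the group is then evaluated in turn against the same ep,
	// so changes made while evaluating one txn are visible to the next.
	for i := range ep.group {
		fmt.Printf("evaluating txn %d against shared params (pooled budget %d)\n", i, ep.pooledBudget)
	}
}
```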
* Remove explicit PastSideEffects handling in tealdbg
* Fix flaky test in randomized ABI encoding test (#3346)
* update abi encoding test random testcase generator, scale down parameters to avoid flaky test
* parameterized test script
* add notes to explain why flaky test is eliminated
* show more information from self-roundtrip testing
* fully utilize require, remove fmt
* Always create EvalParams to evaluate a transaction group.
We used to have an optimization to avoid creating EvalParams unless there was an app call in the transaction group. But the interface that allows transaction processing to communicate changes into the EvalParams is complicated by that (we must only do it if there is one!). This also allows us to use the same construction function for eps created for app and logic evaluation, simplifying dry-run and the debugger. The optimization is less needed now anyway: 1) the ep is now shared for the whole group, so there is only one; 2) the ep is smaller now, as we only store nil pointers instead of larger scratch space objects for non-app calls.
* Correct mistaken commit
* Update `goal app method` handling of args and return values (#3352)
* Fix method call arg overflow handling
* Only check the last log for the return value
* Address feedback
* Add comment explaining ABI return prefix
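(Illustration, not code from this diff: ARC-4 logs a method's return value with a 4-byte prefix, 0x151f7c75, the first four bytes of SHA-512/256 of "return". The sketch below shows the "check only the last log" rule from #3352 with a made-up helper.)
```
package main

import (
	"bytes"
	"errors"
	"fmt"
)

// arc4ReturnPrefix is the ARC-4 log prefix for ABI return values.
var arc4ReturnPrefix = []byte{0x15, 0x1f, 0x7c, 0x75}

// abiReturnValue scans only the last log entry, mirroring the change above:
// earlier logs may legitimately start with the prefix, so they are ignored.
func abiReturnValue(logs [][]byte) ([]byte, error) {
	if len(logs) == 0 {
		return nil, errors.New("no logs, so no ABI return value")
	}
	last := logs[len(logs)-1]
	if !bytes.HasPrefix(last, arc4ReturnPrefix) {
		return nil, errors.New("last log is not an ABI return value")
	}
	return last[len(arc4ReturnPrefix):], nil
}

func main() {
	logs := [][]byte{
		[]byte("debug: something else"),
		append(append([]byte{}, arc4ReturnPrefix...), 0x00, 0x2a), // encoded return value
	}
	val, err := abiReturnValue(logs)
	fmt.Println(val, err) // [0 42] <nil>
}
```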
* Support app creation in `goal app method` (#3353)
* Support app creation in `goal app method`
* Don't use nonprintable tab character
* Link to specific gist version
* Fix error messages
* Rename `methodCreation` to `methodCreatesApp`
* Spec improvements
* Update license to 2022 (#3360)
Update license on all source files to 2022.
* Add totals checks into acct updates tests (#3367)
## Summary
After #2922 there is some leftover unused code for totals calculations. Turned this code into actual asserts.
## Test Plan
This is a tests-only update.
* More spec improvements, including resource "availability"
* Recursively return inner transaction tree
* Lint
* No need for ConfirmedRound, so don't deref a nil pointer!
* license check
* Shut up, dawg.
Co-authored-by: DevOps Service <devops-service@algorand.com>
Co-authored-by: Jason Paulos <jasonpaulos@users.noreply.github.com>
Co-authored-by: chris erway <51567+cce@users.noreply.github.com>
Co-authored-by: Shant Karakashian <55754073+algonautshant@users.noreply.github.com>
Co-authored-by: John Lee <64482439+algojohnlee@users.noreply.github.com>
Co-authored-by: Will Winder <wwinder.unh@gmail.com>
Co-authored-by: Ben Guidarelli <ben.guidarelli@gmail.com>
Co-authored-by: Pavel Zbitskiy <65323360+algorandskiy@users.noreply.github.com>
Co-authored-by: Jack <87339414+algojack@users.noreply.github.com>
Co-authored-by: John Lee <john.lee@algorand.com>
Co-authored-by: algobarb <78746954+algobarb@users.noreply.github.com>
Co-authored-by: Zeph Grunschlag <tzaffi@users.noreply.github.com>
Co-authored-by: Fabrice Benhamouda <fabrice.benhamouda@normalesup.org>
Co-authored-by: Tsachi Herman <tsachi.herman@algorand.com>
Co-authored-by: Tolik Zinovyev <tolik@algorand.com>
Co-authored-by: egieseke <eric_gieseke@yahoo.com>
Co-authored-by: Quentin Kniep <kniepque@hu-berlin.de>
Co-authored-by: Hang Su <87964331+ahangsu@users.noreply.github.com>
Co-authored-by: Or Aharonee <or.aharonee@algorand.com>
* Feature/contract to contract (#3389)
(#3311) ## Summary This PR moves the catchpoint file writing to be performed outside of the trackers lock. This resolves the issue where a long catchpoint file writing blocks the agreement from validating and propagating votes. ## Test Plan * [x] Test manually & use existing tests. * [x] Implement a unit test * [x] Deploy a local network where the catchpoint writing takes a long time and verify it doesn't get blocked during catchpoint writing. * Bump version number * Update `goal app method` handling of args and return values (#3352) * Fix method call arg overflow handling * Only check last log for return value * Address feedback * Add comment explaining ABI return prefix * Support app creation in `goal app method` (#3353) * Support app creation in `goal app method` * Don't use nonprintable tab character * Link to specific gist version * Fix error messages * Rename `methodCreation` to `methodCreatesApp` * Spec improvments * Update license to 2022 (#3360) Update license on all source files to 2022. * Add totals checks into acct updates tests (#3367) ## Summary After #2922 there is some leftover unused code for totals calculations. Turned this code into actual asserts. ## Test Plan This is tests update * More spec improvments, including resource "availability" * Recursively return inner transaction tree * Lint * No need for ConfirmedRound, so don't deref a nil pointer! * remove buildnumber.dat * license check * Shut up, dawg. * PKI State Proof Incremental Key Loading (#3281) ## Summary Followup to #3261 (contained in diff). Use the new key loading routine from the REST API. ## Test Plan New unit tests. * Limit number of simultaneous REST connections (#3326) ## Summary This PR limits the number of simultaneous REST connections we process to prevent the exhaustion of resources and ultimately a crash. Two limits are introduced: soft and hard. When the soft limit is exceeded, new connections are returned the 429 Too Many Requests http code. When the hard limit is exceeded, new connections are accepted and immediately closed. Partially resolves https://github.com/algorand/go-algorand-internal/issues/1814. ## Test Plan Added unit tests. * Use rejecting limit listener in WebsocketNetwork. (#3380) ## Summary Replace the standard limit listener with the new rejecting limit listener in `WebsocketNetwork`. This will let the dialing node know that connection is impossible faster. ## Test Plan Probably not necessary. * Delete unused constant. (#3379) ## Summary This PR deletes an unused constant. ## Test Plan None. * Add test to exercise lookup corner cases (#3376) ## Summary This test attempts to cover the case when an accountUpdates.lookupX method can't find the requested address, falls through looking at deltas and the LRU accounts cache, then hits the database — only to discover that the round stored in the database (committed in `accountUpdates.commitRound`) is out of sync with `accountUpdates.cachedDBRound` (updated a little bit later in `accountUpdates.postCommit`). In this case, the lookup method waits and tries again, iterating the `for { }` it is in. We did not have coverage for this code path before. ## Test Plan Adds new test. * Test for catchup stop on completion (#3306) Adding a test for the fix in #3299 ## Test Plan This is a test * Delete unused AtomicCommitWriteLock(). (#3383) ## Summary This PR deletes unused `AtomicCommitWriteLock()` and simplifies code. ## Test Plan None. 
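The two connection-limit entries above (soft limit answered with 429 Too Many Requests; hard limit accepted and immediately closed; rejecting limit listener wired into `WebsocketNetwork`) can be illustrated with a minimal Go sketch of a rejecting limit listener. This is not the go-algorand implementation; the package, type, and function names below are illustrative only.

```go
package limitlistener

import (
	"net"
	"sync/atomic"
)

// RejectingLimitListener sketches the idea described above: instead of
// blocking new connections once the limit is reached (as a classic limit
// listener does), it accepts them and closes them immediately, so the
// dialing node learns right away that the connection is impossible.
// Illustrative only; not the go-algorand implementation.
type RejectingLimitListener struct {
	net.Listener
	max    int64
	active int64
}

// New wraps an existing listener with a hard connection cap.
func New(l net.Listener, max int64) *RejectingLimitListener {
	return &RejectingLimitListener{Listener: l, max: max}
}

// Accept returns the next connection, rejecting (accept-then-close) any
// connection that would exceed the cap.
func (r *RejectingLimitListener) Accept() (net.Conn, error) {
	for {
		c, err := r.Listener.Accept()
		if err != nil {
			return nil, err
		}
		if atomic.AddInt64(&r.active, 1) > r.max {
			// Over the hard limit: reject by closing immediately.
			atomic.AddInt64(&r.active, -1)
			c.Close()
			continue
		}
		return &trackedConn{Conn: c, owner: r}, nil
	}
}

// trackedConn decrements the active count exactly once when closed.
type trackedConn struct {
	net.Conn
	owner  *RejectingLimitListener
	closed int32
}

func (c *trackedConn) Close() error {
	if atomic.CompareAndSwapInt32(&c.closed, 0, 1) {
		atomic.AddInt64(&c.owner.active, -1)
	}
	return c.Conn.Close()
}
```

A caller would wrap its `net.Listener` with `New(listener, hardLimit)` before handing it to the HTTP or websocket server, so connections beyond the cap fail fast instead of sitting in an accept queue.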
Co-authored-by: DevOps Service <devops-service@algorand.com> Co-authored-by: Jason Paulos <jasonpaulos@users.noreply.github.com> Co-authored-by: chris erway <51567+cce@users.noreply.github.com> Co-authored-by: Shant Karakashian <55754073+algonautshant@users.noreply.github.com> Co-authored-by: John Lee <64482439+algojohnlee@users.noreply.github.com> Co-authored-by: Will Winder <wwinder.unh@gmail.com> Co-authored-by: Ben Guidarelli <ben.guidarelli@gmail.com> Co-authored-by: Pavel Zbitskiy <65323360+algorandskiy@users.noreply.github.com> Co-authored-by: Jack <87339414+algojack@users.noreply.github.com> Co-authored-by: John Lee <john.lee@algorand.com> Co-authored-by: algobarb <78746954+algobarb@users.noreply.github.com> Co-authored-by: Zeph Grunschlag <tzaffi@users.noreply.github.com> Co-authored-by: Fabrice Benhamouda <fabrice.benhamouda@normalesup.org> Co-authored-by: Tsachi Herman <tsachi.herman@algorand.com> Co-authored-by: Tolik Zinovyev <tolik@algorand.com> Co-authored-by: egieseke <eric_gieseke@yahoo.com> Co-authored-by: Quentin Kniep <kniepque@hu-berlin.de> Co-authored-by: Hang Su <87964331+ahangsu@users.noreply.github.com> Co-authored-by: Or Aharonee <or.aharonee@algorand.com> Co-authored-by: Barbara Poon <barbara.poon@algorand.com> * Feature/contract to contract (#3390) * Update the Version, BuildNumber, genesistimestamp.data * Three new globals for to help contract-to-contract usability * detritis * Check error * doc comments * Support transaction arguments for `goal app method` (#3233) * Implement transactions as arguments * Fix indexing and dryrun issue * Add docstring * Satisfy review dog * Fix pointer issue * Fix group command * Rename e2e test * Fix filename variable * Add e2e test * Use tab * CI: use libboost-math-dev instead of libboost-all-dev (#3223) ## Summary Small change: libboost-math-dev requires just 4 packages to install, while libboost-all-dev requires > 100. Only Debian/Ubuntu distributions provide fine-grained boost packages like this, but should shave a little time off the CI builds. (Our only boost include is boost/math/distributions/binomial.hpp.) ## Test Plan Builds should pass as before. Now that we are no longer using Travis for Linux builds, the side effect of libboost-all-dev installing make and other missing build tools on Travis encountered in #2717 is no longer a concern. * testing: fixes to rest-participation-key e2e test (#3238) ## Summary - Test to make sure RES has the right input before counting line numbers for result size. - Rest RES to empty so that the same output is not recycled in case of an error. - exit 1 in case of an error - Reduce LAST_ROUND from 1200000 to 120 - "Get List of Keys" before getting NUM_IDS_3 otherwise it will recycle old RES value. * testing: interactive mode for e2e testing (#3227) ## Summary Some e2e tests require a python environment for testing. Unfortunately, setting up that environment adequately similar to the testing environment may not be trivial. This change introduces an interactive mode to the e2e.sh script which stops at the point of running the tests, and allows the user to run the tests from the same testing environment. ## Test Plan No tests needed. Tested the script locally. * Make dev-mode tests less flaky. (#3252) ## Summary Fix a couple flaws in the new go-e2e tests built ontop of DevMode: * Shutdown the fixture when finished. * Don't run in parallel. * Longer delays / better algorithms to wait for data flushing to complete. * Check for "out of order" keys. ## Test Plan N/A, this is a test. 
* adding libtool to ubuntu deps (#3251) ## Summary The sandbox is not building with dev config using master branch https://github.com/algorand/sandbox/issues/85, complains about libtool not being installed Guessing from this change https://github.com/algorand/go-algorand/pull/3223 Adding libtool to UBUNTU_DEPS in install scripts ## Test Plan Set config in sandbox to my branch `sandbox up dev` It built * Fix error shadowing in Eval (#3258) ## Summary Error from account preloading was shadowed by returning a wrong err variable. This caused subsequent problems in account updates and masked the original failure. ## Test Plan Use existing tests * Disable flaky test. (#3256) ## Summary This test doesn't work properly, disable it until #3255 addresses any underlying problems. * Update the Version, BuildNumber, genesistimestamp.data * Fix a data race in app tests (#3269) ## Summary A test helper function `commitRound` accessed `l.trackers.lastFlushTime` without taking a lock. Fixed. ## Test Plan ``` go test ./ledger -run TestAppEmpty -race -count=50 ok github.com/algorand/go-algorand/ledger 4.078s ``` * Fix e2e.sh mixed indent characters. (#3266) ## Summary Fix e2e.sh mixed indent characters. * Fix ParticipationKeyResponse type. (#3264) ## Summary Fix a small type discrepancy in the OpenAPI spec ahead of some other work that's about to happen. * disable parallelism for e2e-go tests (#3242) ## Summary This sets `-p 1` for the e2e-go tests, intended to make them more deterministic when running on a VM with relatively constrained resources. Since each e2e-go test might spin up a few nodes, it seems like it would help to avoid resource contention. ## Test Plan Tests should run as before. Desired effect can be verified by looking at the test output where the value of PARALLEL_FLAG is printed out before tests are run. * Updating Readme.md with circleci status badges (#3245) * Fix formatting for CircleCI badges (#3272) * Add Custom Scenario for Performance Testing (#3278) Add Custom Scenario for Performance Testing. Add README on how to run custom scenario and modify create_and_deploy_recipe.sh to accept a network template that will generate a new recipe. * Impose limits on the entire "tree" of inner calls. This also increases the realism of testing of multiple app calls in a group by creating the EvalParams with the real constructor, thus getting the pooling stuff tested here without playing games manipulating the ep after construction. * ParticipationRegistry - StateProof loading methods (#3261) ## Summary Add ParticipationRegistry methods for setting and retrieving state proof keys. Since they aren't in master yet there is a `type StateProofKey []byte` stub which will need to be updated later. ## Test Plan New unit tests. * Op base64 decode (#3220) b64 opcode, tests, and specs * Bump Version, Remove buildnumber.dat and genesistimestamp.dat files. * Change golang version to 1.16.11 in go-algorand (#2825) Upgrading to 1.16 to help alleviate issues with working on different go versions, and update to a supported, more secure version. Release notes for Go 1.15 and 1.16: https://tip.golang.org/doc/go1.16 https://tip.golang.org/doc/go1.15 * Compatibility mode for partkeyinfo. (#3291) ## Summary Compatibility for `partkeyinfo` was also needed by some users. In addition to the different format, the old command also allows printing key information when the node is not running Workarounds: 1) use an older `goal` binary. 
2) use `algokey part info --keyfile <file>` ## Test Plan Tested manually: ``` ~$ goal account partkeyinfo -d /tmp/private_network/Node/ Dumping participation key info from /tmp/private_network/Node/... Participation ID: CPLHRU3WEY3PE7XTPPSIE7BGJYWAIFPS7DL3HZNC4OKQRQ5YAYUA Parent address: DGS6VNX2BRMKGKVAS2LTREMYG33TOCYPFLPCQ3DUTJULQU6P6S7KJCDNTU Last vote round: 1 Last block proposal round: 2 Effective first round: 1 Effective last round: 3000000 First round: 0 Last round: 3000000 Key dilution: 10000 Selection key: 5QRrTgzSUTqqym43QVsBus1/AOwGR5zE+I7FGwA14vQ= Voting key: PK0NMyZ4BKSjPQ9JuT7dQBLdTpjLQv2txuDYDKhkuqs= ~$ goal account partkeyinfo -d /tmp/private_network/Node/ -c Dumping participation key info from /tmp/private_network/Node/... ------------------------------------------------------------------ File: Wallet2.0.3000000.partkey { "acct": "DGS6VNX2BRMKGKVAS2LTREMYG33TOCYPFLPCQ3DUTJULQU6P6S7KJCDNTU", "last": 3000000, "sel": "5QRrTgzSUTqqym43QVsBus1/AOwGR5zE+I7FGwA14vQ=", "vote": "PK0NMyZ4BKSjPQ9JuT7dQBLdTpjLQv2txuDYDKhkuqs=", "voteKD": 10000 } ``` * TestEcdsa: fix flaky "tampering" of public key (#3282) ## Summary This test (TestEcdsa) tests the ecdsa_pk_decompress opcode and intentionally "tampers" with the public key by setting the first byte to zero. Occasionally this test is failing, likely because the first byte was already zero. (The test failures are for the cases where failure is expected, `pass=false`) ## Test Plan Existing test should pass, occasional flakiness should go away. * Move appID tracking into EvalContext, out of LedgerForLogic This change increases the seperation between AVM execution and the ledger being used to lookup resources. Previously, the ledger kept track of the appID being executed, to offer a narrower interface to those resources. But now, with app-to-app calls, the appID being executed must change, and the AVM needs to maintain the current appID. * Stupid linter * Support reference types in `goal app method` (#3275) * Fix method signature parse bug * Support reference types * Review dog fixes * Fix comments * Add a hash prefix for ARCs-related hashes (#3298) ## Summary This is to allow ARCs (github.com/algorandfoundation/ARCs) to have their own hash prefix without risk of collision. ## Test Plan It is purely informational. There is no real code change. * catchup: suspend the catchup session once the agreement service kicks in (#3299) The catchup service stops when it is complete, i.e. it has reached up to the round which is being agreed on. The catchup service knows it is complete and should stop, when it finds that a block is in the ledger before it adds it. In other words, apart from the catchup, only the agreement adds blocks to the ledger. And when the agreement adds a block to the ledger before the catchup, it means the agreement is ahead, and the catchup is complete. When `fetchAndWrite` detects the block is already in the ledger, it returns. The return value of `false` stops the catchup syncing. In previous releases, `fetchAndWrite` was only checking if the block is already in the ledger after attempting to fetch it. Since it fails to fetch a block not yet agreed on, the fetch fails after multiple attempts, and `fetchAndWrite` returns `false` ending the catchup. A recent change made this process more efficient by first checking if the block is in the ledger before/during the fetch. 
However, once the block was found in the ledger, `fetchAndWrite` returned true instead of false (consistent with already existing logic since forever, which was also wrong). This caused the catchup to continue syncing after catchup was complete. This change fixes the return value from true to false. * Bump buildnumber.dat * testing: disable flaky test (#3268) Disable a flaky test, to be re-enabled later with #3267. * enumerate conditions that might cause this fetchAndWrite to return false (#3301) ## Summary The fetchAndWrite function contains some complex logic to ultimately determine if we should continue trying to catch up. The conditions that might cause it to return false should be more explicitly enumerated.
## Test Plan Just comments
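The catchup behavior described above (stop syncing once a block is found to already be in the ledger, because apart from catchup only the agreement service adds blocks) boils down to a simple predicate. A minimal Go sketch follows; the real `fetchAndWrite` lives in the catchup service and carries much more context, so the `Ledger` interface and the function name here are stand-ins.

```go
package catchup

// Ledger is a stand-in for the small slice of ledger functionality the sketch needs.
type Ledger interface {
	// LatestRound reports the last round already written to the ledger.
	LatestRound() uint64
}

// shouldKeepSyncing captures the rule behind #3299/#3301: if the block for
// round is already in the ledger, the agreement service got there first, so
// catchup is complete and the caller must stop (the equivalent of
// fetchAndWrite returning false). Otherwise, keep fetching.
func shouldKeepSyncing(l Ledger, round uint64) bool {
	if round <= l.LatestRound() {
		// Block already present: agreement is ahead, end the catchup session.
		return false
	}
	return true
}
```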
Co-authored-by: DevOps Service <devops-service@algorand.com> Co-authored-by: Jason Paulos <jasonpaulos@users.noreply.github.com> Co-authored-by: chris erway <51567+cce@users.noreply.github.com> Co-authored-by: Shant Karakashian <55754073+algonautshant@users.noreply.github.com> Co-authored-by: John Lee <64482439+algojohnlee@users.noreply.github.com> Co-authored-by: Will Winder <wwinder.unh@gmail.com> Co-authored-by: Ben Guidarelli <ben.guidarelli@gmail.com> Co-authored-by: Pavel Zbitskiy <65323360+algorandskiy@users.noreply.github.com> Co-authored-by: Jack <87339414+algojack@users.noreply.github.com> Co-authored-by: John Lee <john.lee@algorand.com> Co-authored-by: algobarb <78746954+algobarb@users.noreply.github.com> Co-authored-by: Zeph Grunschlag <tzaffi@users.noreply.github.com> Co-authored-by: Fabrice Benhamouda <fabrice.benhamouda@normalesup.org> Co-authored-by: Tsachi Herman <tsachi.herman@algorand.com> Co-authored-by: Tolik Zinovyev <tolik@algorand.com> Co-authored-by: egieseke <eric_gieseke@yahoo.com> Co-authored-by: Quentin Kniep <kniepque@hu-berlin.de> Co-authored-by: Hang Su <87964331+ahangsu@users.noreply.github.com> Co-authored-by: Or Aharonee <or.aharonee@algorand.com> Co-authored-by: Barbara Poon <barbara.poon@algorand.com> * Feature/c2c temp (#3392) * Update the Version, BuildNumber, genesistimestamp.data * Support transaction arguments for `goal app method` (#3233) * Implement transactions as arguments * Fix indexing and dryrun issue * Add docstring * Satisfy review dog * Fix pointer issue * Fix group command * Rename e2e test * Fix filename variable * Add e2e test * Use tab * CI: use libboost-math-dev instead of libboost-all-dev (#3223) ## Summary Small change: libboost-math-dev requires just 4 packages to install, while libboost-all-dev requires > 100. Only Debian/Ubuntu distributions provide fine-grained boost packages like this, but should shave a little time off the CI builds. (Our only boost include is boost/math/distributions/binomial.hpp.) ## Test Plan Builds should pass as before. Now that we are no longer using Travis for Linux builds, the side effect of libboost-all-dev installing make and other missing build tools on Travis encountered in #2717 is no longer a concern. * testing: fixes to rest-participation-key e2e test (#3238) ## Summary - Test to make sure RES has the right input before counting line numbers for result size. - Rest RES to empty so that the same output is not recycled in case of an error. - exit 1 in case of an error - Reduce LAST_ROUND from 1200000 to 120 - "Get List of Keys" before getting NUM_IDS_3 otherwise it will recycle old RES value. * testing: interactive mode for e2e testing (#3227) ## Summary Some e2e tests require a python environment for testing. Unfortunately, setting up that environment adequately similar to the testing environment may not be trivial. This change introduces an interactive mode to the e2e.sh script which stops at the point of running the tests, and allows the user to run the tests from the same testing environment. ## Test Plan No tests needed. Tested the script locally. * Make dev-mode tests less flaky. (#3252) ## Summary Fix a couple flaws in the new go-e2e tests built ontop of DevMode: * Shutdown the fixture when finished. * Don't run in parallel. * Longer delays / better algorithms to wait for data flushing to complete. * Check for "out of order" keys. ## Test Plan N/A, this is a test. 
* Delete unused AtomicCommitWriteLock().
(#3383) ## Summary This PR deletes unused `AtomicCommitWriteLock()` and simplifies code. ## Test Plan None. Co-authored-by: DevOps Service <devops-service@algorand.com> Co-authored-by: Jason Paulos <jasonpaulos@users.noreply.github.com> Co-authored-by: chris erway <51567+cce@users.noreply.github.com> Co-authored-by: Shant Karakashian <55754073+algonautshant@users.noreply.github.com> Co-authored-by: John Lee <64482439+algojohnlee@users.noreply.github.com> Co-authored-by: Will Winder <wwinder.unh@gmail.com> Co-authored-by: Ben Guidarelli <ben.guidarelli@gmail.com> Co-authored-by: Pavel Zbitskiy <65323360+algorandskiy@users.noreply.github.com> Co-authored-by: Jack <87339414+algojack@users.noreply.github.com> Co-authored-by: John Lee <john.lee@algorand.com> Co-authored-by: algobarb <78746954+algobarb@users.noreply.github.com> Co-authored-by: Zeph Grunschlag <tzaffi@users.noreply.github.com> Co-authored-by: Fabrice Benhamouda <fabrice.benhamouda@normalesup.org> Co-authored-by: Tsachi Herman <tsachi.herman@algorand.com> Co-authored-by: Tolik Zinovyev <tolik@algorand.com> Co-authored-by: egieseke <eric_gieseke@yahoo.com> Co-authored-by: Quentin Kniep <kniepque@hu-berlin.de> Co-authored-by: Hang Su <87964331+ahangsu@users.noreply.github.com> Co-authored-by: Or Aharonee <or.aharonee@algorand.com> Co-authored-by: Barbara Poon <barbara.poon@algorand.com> * Feature/contract to contract (#3394) * Update the Version, BuildNumber, genesistimestamp.data * Three new globals for to help contract-to-contract usability * detritis * Check error * doc comments * Support transaction arguments for `goal app method` (#3233) * Implement transactions as arguments * Fix indexing and dryrun issue * Add docstring * Satisfy review dog * Fix pointer issue * Fix group command * Rename e2e test * Fix filename variable * Add e2e test * Use tab * CI: use libboost-math-dev instead of libboost-all-dev (#3223) ## Summary Small change: libboost-math-dev requires just 4 packages to install, while libboost-all-dev requires > 100. Only Debian/Ubuntu distributions provide fine-grained boost packages like this, but should shave a little time off the CI builds. (Our only boost include is boost/math/distributions/binomial.hpp.) ## Test Plan Builds should pass as before. Now that we are no longer using Travis for Linux builds, the side effect of libboost-all-dev installing make and other missing build tools on Travis encountered in #2717 is no longer a concern. * testing: fixes to rest-participation-key e2e test (#3238) ## Summary - Test to make sure RES has the right input before counting line numbers for result size. - Rest RES to empty so that the same output is not recycled in case of an error. - exit 1 in case of an error - Reduce LAST_ROUND from 1200000 to 120 - "Get List of Keys" before getting NUM_IDS_3 otherwise it will recycle old RES value. * testing: interactive mode for e2e testing (#3227) ## Summary Some e2e tests require a python environment for testing. Unfortunately, setting up that environment adequately similar to the testing environment may not be trivial. This change introduces an interactive mode to the e2e.sh script which stops at the point of running the tests, and allows the user to run the tests from the same testing environment. ## Test Plan No tests needed. Tested the script locally. * Make dev-mode tests less flaky. (#3252) ## Summary Fix a couple flaws in the new go-e2e tests built ontop of DevMode: * Shutdown the fixture when finished. * Don't run in parallel. 
* Longer delays / better algorithms to wait for data flushing to complete. * Check for "out of order" keys. ## Test Plan N/A, this is a test. * adding libtool to ubuntu deps (#3251) ## Summary The sandbox is not building with dev config using master branch https://github.com/algorand/sandbox/issues/85, complains about libtool not being installed Guessing from this change https://github.com/algorand/go-algorand/pull/3223 Adding libtool to UBUNTU_DEPS in install scripts ## Test Plan Set config in sandbox to my branch `sandbox up dev` It built * Fix error shadowing in Eval (#3258) ## Summary Error from account preloading was shadowed by returning a wrong err variable. This caused subsequent problems in account updates and masked the original failure. ## Test Plan Use existing tests * Disable flaky test. (#3256) ## Summary This test doesn't work properly, disable it until #3255 addresses any underlying problems. * Update the Version, BuildNumber, genesistimestamp.data * Fix a data race in app tests (#3269) ## Summary A test helper function `commitRound` accessed `l.trackers.lastFlushTime` without taking a lock. Fixed. ## Test Plan ``` go test ./ledger -run TestAppEmpty -race -count=50 ok github.com/algorand/go-algorand/ledger 4.078s ``` * Fix e2e.sh mixed indent characters. (#3266) ## Summary Fix e2e.sh mixed indent characters. * Fix ParticipationKeyResponse type. (#3264) ## Summary Fix a small type discrepancy in the OpenAPI spec ahead of some other work that's about to happen. * disable parallelism for e2e-go tests (#3242) ## Summary This sets `-p 1` for the e2e-go tests, intended to make them more deterministic when running on a VM with relatively constrained resources. Since each e2e-go test might spin up a few nodes, it seems like it would help to avoid resource contention. ## Test Plan Tests should run as before. Desired effect can be verified by looking at the test output where the value of PARALLEL_FLAG is printed out before tests are run. * Updating Readme.md with circleci status badges (#3245) * Fix formatting for CircleCI badges (#3272) * Add Custom Scenario for Performance Testing (#3278) Add Custom Scenario for Performance Testing. Add README on how to run custom scenario and modify create_and_deploy_recipe.sh to accept a network template that will generate a new recipe. * Impose limits on the entire "tree" of inner calls. This also increases the realism of testing of multiple app calls in a group by creating the EvalParams with the real constructor, thus getting the pooling stuff tested here without playing games manipulating the ep after construction. * ParticipationRegistry - StateProof loading methods (#3261) ## Summary Add ParticipationRegistry methods for setting and retrieving state proof keys. Since they aren't in master yet there is a `type StateProofKey []byte` stub which will need to be updated later. ## Test Plan New unit tests. * Op base64 decode (#3220) b64 opcode, tests, and specs * Bump Version, Remove buildnumber.dat and genesistimestamp.dat files. * Change golang version to 1.16.11 in go-algorand (#2825) Upgrading to 1.16 to help alleviate issues with working on different go versions, and update to a supported, more secure version. Release notes for Go 1.15 and 1.16: https://tip.golang.org/doc/go1.16 https://tip.golang.org/doc/go1.15 * Compatibility mode for partkeyinfo. (#3291) ## Summary Compatibility for `partkeyinfo` was also needed by some users. 
In addition to the different format, the old command also allows printing key information when the node is not running Workarounds: 1) use an older `goal` binary. 2) use `algokey part info --keyfile <file>` ## Test Plan Tested manually: ``` ~$ goal account partkeyinfo -d /tmp/private_network/Node/ Dumping participation key info from /tmp/private_network/Node/... Participation ID: CPLHRU3WEY3PE7XTPPSIE7BGJYWAIFPS7DL3HZNC4OKQRQ5YAYUA Parent address: DGS6VNX2BRMKGKVAS2LTREMYG33TOCYPFLPCQ3DUTJULQU6P6S7KJCDNTU Last vote round: 1 Last block proposal round: 2 Effective first round: 1 Effective last round: 3000000 First round: 0 Last round: 3000000 Key dilution: 10000 Selection key: 5QRrTgzSUTqqym43QVsBus1/AOwGR5zE+I7FGwA14vQ= Voting key: PK0NMyZ4BKSjPQ9JuT7dQBLdTpjLQv2txuDYDKhkuqs= ~$ goal account partkeyinfo -d /tmp/private_network/Node/ -c Dumping participation key info from /tmp/private_network/Node/... ------------------------------------------------------------------ File: Wallet2.0.3000000.partkey { "acct": "DGS6VNX2BRMKGKVAS2LTREMYG33TOCYPFLPCQ3DUTJULQU6P6S7KJCDNTU", "last": 3000000, "sel": "5QRrTgzSUTqqym43QVsBus1/AOwGR5zE+I7FGwA14vQ=", "vote": "PK0NMyZ4BKSjPQ9JuT7dQBLdTpjLQv2txuDYDKhkuqs=", "voteKD": 10000 } ``` * TestEcdsa: fix flaky "tampering" of public key (#3282) ## Summary This test (TestEcdsa) tests the ecdsa_pk_decompress opcode and intentionally "tampers" with the public key by setting the first byte to zero. Occasionally this test is failing, likely because the first byte was already zero. (The test failures are for the cases where failure is expected, `pass=false`) ## Test Plan Existing test should pass, occasional flakiness should go away. * Move appID tracking into EvalContext, out of LedgerForLogic This change increases the seperation between AVM execution and the ledger being used to lookup resources. Previously, the ledger kept track of the appID being executed, to offer a narrower interface to those resources. But now, with app-to-app calls, the appID being executed must change, and the AVM needs to maintain the current appID. * Stupid linter * Support reference types in `goal app method` (#3275) * Fix method signature parse bug * Support reference types * Review dog fixes * Fix comments * Add a hash prefix for ARCs-related hashes (#3298) ## Summary This is to allow ARCs (github.com/algorandfoundation/ARCs) to have their own hash prefix without risk of collision. ## Test Plan It is purely informational. There is no real code change. * catchup: suspend the catchup session once the agreement service kicks in (#3299) The catchup service stops when it is complete, i.e. it has reached up to the round which is being agreed on. The catchup service knows it is complete and should stop, when it finds that a block is in the ledger before it adds it. In other words, apart from the catchup, only the agreement adds blocks to the ledger. And when the agreement adds a block to the ledger before the catchup, it means the agreement is ahead, and the catchup is complete. When `fetchAndWrite` detects the block is already in the ledger, it returns. The return value of `false` stops the catchup syncing. In previous releases, `fetchAndWrite` was only checking if the block is already in the ledger after attempting to fetch it. Since it fails to fetch a block not yet agreed on, the fetch fails after multiple attempts, and `fetchAndWrite` returns `false` ending the catchup. A recent change made this process more efficient by first checking if the block is in the ledger before/during the fetch. 
However, once the block was found in the ledger, `fetchAndWrite` returned true instead of false (consistent with already existing logic since forever, which was also wrong). This caused the catchup to continue syncing after catchup was complete. This change fixes the return value from true to false. * Bump buildnumber.dat * testing: disable flaky test (#3268) Disable a flaky test, to be re-enabled later with #3267. * enumerate conditions that might cause this fetchAndWrite to return false (#3301) ## Summary The fetchAndWrite function contains some complex logic to ultimately determine if we should continue trying to catch up. The conditions that might cause it to return false should be more explicitly enumerated.
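A minimal sketch of the stop condition described in #3299 and #3301 above, assuming invented helper names (`ledgerHasBlock`, `fetchBlock`) rather than the real catchup service internals: the sync loop continues only while `fetchAndWrite` reports progress, and it must return false once the block is already in the ledger, since that means agreement is ahead and catchup is complete.

```go
package main

import "fmt"

// Toy model of the decision described above; the real fetchAndWrite has more
// stop conditions (peer errors, retry limits, shutdown) that #3301 enumerates.
func fetchAndWrite(rnd uint64, ledgerHasBlock, fetchBlock func(uint64) bool) bool {
	if ledgerHasBlock(rnd) {
		// Agreement already added this block: catchup is complete, so stop
		// syncing. Returning true here was the bug fixed by #3299.
		return false
	}
	if !fetchBlock(rnd) {
		return false // could not fetch the block; give up on this session
	}
	return true // block fetched and written; continue with the next round
}

func main() {
	inLedger := func(r uint64) bool { return r <= 100 } // rounds agreement already wrote
	fetched := func(r uint64) bool { return true }
	fmt.Println(fetchAndWrite(100, inLedger, fetched)) // false: stop, catchup complete
	fmt.Println(fetchAndWrite(101, inLedger, fetched)) // true: keep syncing
}
```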
## Test Plan Just comments * Fix unit tests error messages * make sure the block service is not attempting to access the ledger after being stopped. (#3303) ## Summary The block service was attempting to serve block via the http handler even after it has been stopped. This lead to undesired downstream failures in the ledger, which was shutdown as well. ## Test Plan unit test added. * Avoid creating algod process for the sole purpose of retrieving the genesis-id. (#3308) ## Summary Avoid creating algod process for the sole purpose of retrieving the genesis-id. Existing code was calling `algod -G -d <data dir>` in order to obtain the genesis version string. The genesis version string can be easily retrieved by loading the genesis file. ## Test Plan Use existing e2e tests. * documentation: fix algorand specs link (#3309) ## Summary This PR fixes a link in a README. ## Testing I clicked on the new link. * testing: reword partitiontest lint message. (#3297) ## Summary The wording on this was tripping me, maybe I was having an off day. I think it would be slightly easier if the message were to tell exactly what you need to do (and not use the angle brackets). * testing: fix random data race in TestAppAccountDataStorage (#3315) fix random data race in unit test * Allow access to resources created in the same transaction group The method will be reworked, but the tests are correct and want to get them visible to team. * ledger: perform the catchpoint writing outside the trackers lock. (#3311) ## Summary This PR moves the catchpoint file writing to be performed outside of the trackers lock. This resolves the issue where a long catchpoint file writing blocks the agreement from validating and propagating votes. ## Test Plan * [x] Test manually & use existing tests. * [x] Implement a unit test * [x] Deploy a local network where the catchpoint writing takes a long time and verify it doesn't get blocked during catchpoint writing. * Separate tx and key validity for `goal account renewpartkey` (#3286) Always use currentRound+proto.MaxTxnLife as last valid round for the transaction when renewing instead of using the partkey validity period. This fixes #3283 * Add qkniep to THANKS.md (#3320) ## Summary Add qkniep to THANKS.md * Followup to opcode base64_decode (#3288) * alphabet begone in favor of encoding * unit test various padding and whitespace scenarios * padding permutations also fail * "Slicing" --> "Manipulation" * fix the codegen fail? * Documenting padding, whitespace, other character behavior * Add help and fish mode to e2e interactive mode. (#3313) ## Summary Minor improvements to e2e.sh interactive mode: * add to -h output * do not run start stop test in interactive mode * support fish shell ## Test Plan Manual testing: ``` ~$ ./e2e.sh -i ... lots of output removed ... ********** READY ********** The test environment is now set. You can now run tests in another terminal. Configure the environment: set -g VIRTUAL_ENV "/home/will/go/src/github.com/algorand/go-algorand/tmp/out/e2e/130013-1639576513257/ve" set -g PATH "$VIRTUAL_ENV/bin:$PATH" python3 "/home/will/go/src/github.com/algorand/go-algorand/test/scripts"/e2e_client_runner.py "/home/will/go/src/github.com/algorand/go-algorand/test/scripts"/e2e_subs/SCRIPT_FILE_NAME Press enter to shut down the test environment... 
``` * Minimum Account Balance in Algod (#3287) * Access to apps created in group Also adds some tests that are currently skipped for testing - access to addresses of newly created apps - use of gaid in inner transactions Both require some work to implement the thing being tested. * Remove tracked created mechanism in favor of examining applydata. * Add convertAddress tool. (#3304) ## Summary New tool: convertAddress I share this tool with someone every few months, putting it in the repo along with some documentation should make it easier to share and encourage people to share it amongst themselves if it's useful. Merge `debug` into `tools` to make it easier to organize these miscellaneous tools. * tealdbg: increase intermediate reading/writing buffers (#3335) ## Summary Some large teal source files cause the tealdbg/cdt session to choke. Upping the buffer size to allow for larger source files. closes #3100 ## Test Plan Run tealdbg with a large source teal file, ensure the source file makes it to cdt without choking. * Adding method pseudo op to readme (#3338) * Allow v6 AVM code to use in-group created asas, apps (& their accts) One exception - apps can not mutate (put or del) keys from the app accounts, because EvalDelta cannot encode such changes. * lint docs * typo * The review dog needs obedience training. * add config.DeadlockDetectionThreshold (#3339) Summary This allows for the deadlock detection threshold to be set by configuration. Test Plan Existing tests should pass. * Use one EvalParams for logic evals, another for apps in dry run We used to use one ep per transaction, shared between sig and and app. But the new model of ep usage is to keep using one while evaluating an entire group. The app ep is now built logic.NewAppEvalParams which, hopefully, will prevent some bugs when we change something in the EvalParams and don't reflect it in what was a "raw" EvalParams construction in debugger and dry run. * Use logic.NewAppEvalParams to decrease copying and bugs in debugger * Simplify use of NewEvalParams. No more nil return when no apps. This way, NewEvalParams can be used for all creations of EvalParams, whether they are intended for logicsig or app use, greatly simplifying the way we make them for use by dry run or debugger (where they serve double duty). * Remove explicit PastSideEffects handling in tealdbg * Fix flaky test in randomized ABI encoding test (#3346) * update abi encoding test random testcase generator, scale down parameters to avoid flaky test * parameterized test script * add notes to explain why flaky test is eliminated * show more information from self-roundtrip testing * fully utilize require, remove fmt * Always create EvalParams to evaluate a transaction group. We used to have an optimization to avoid creating EvalParams unless there was an app call in the transaction group. But the interface to allow transaction processing to communicate changes into the EvalParams is complicated by that (we must only do it if there is one!) This also allows us to use the same construction function for eps created for app and logic evaluation, simplifying dry-run and debugger. The optimization is less needed now anyway: 1) The ep is now shared for the whole group, so it's only one. 2) The ep is smaller now, as we only store nil pointers instead of larger scratch space objects for non-app calls. * Correct mistaken commit * ledger: perform the catchpoint writing outside the trackers lock. 
(#3311) ## Summary This PR moves the catchpoint file writing to be performed outside of the trackers lock. This resolves the issue where a long catchpoint file writing blocks the agreement from validating and propagating votes. ## Test Plan * [x] Test manually & use existing tests. * [x] Implement a unit test * [x] Deploy a local network where the catchpoint writing takes a long time and verify it doesn't get blocked during catchpoint writing. * Bump version number * Update `goal app method` handling of args and return values (#3352) * Fix method call arg overflow handling * Only check last log for return value * Address feedback * Add comment explaining ABI return prefix * Support app creation in `goal app method` (#3353) * Support app creation in `goal app method` * Don't use nonprintable tab character * Link to specific gist version * Fix error messages * Rename `methodCreation` to `methodCreatesApp` * Spec improvments * Update license to 2022 (#3360) Update license on all source files to 2022. * Add totals checks into acct updates tests (#3367) ## Summary After #2922 there is some leftover unused code for totals calculations. Turned this code into actual asserts. ## Test Plan This is tests update * More spec improvments, including resource "availability" * Recursively return inner transaction tree * Lint * No need for ConfirmedRound, so don't deref a nil pointer! * remove buildnumber.dat * license check * Shut up, dawg. * PKI State Proof Incremental Key Loading (#3281) ## Summary Followup to #3261 (contained in diff). Use the new key loading routine from the REST API. ## Test Plan New unit tests. * Limit number of simultaneous REST connections (#3326) ## Summary This PR limits the number of simultaneous REST connections we process to prevent the exhaustion of resources and ultimately a crash. Two limits are introduced: soft and hard. When the soft limit is exceeded, new connections are returned the 429 Too Many Requests http code. When the hard limit is exceeded, new connections are accepted and immediately closed. Partially resolves https://github.com/algorand/go-algorand-internal/issues/1814. ## Test Plan Added unit tests. * Use rejecting limit listener in WebsocketNetwork. (#3380) ## Summary Replace the standard limit listener with the new rejecting limit listener in `WebsocketNetwork`. This will let the dialing node know that connection is impossible faster. ## Test Plan Probably not necessary. * Delete unused constant. (#3379) ## Summary This PR deletes an unused constant. ## Test Plan None. * Add test to exercise lookup corner cases (#3376) ## Summary This test attempts to cover the case when an accountUpdates.lookupX method can't find the requested address, falls through looking at deltas and the LRU accounts cache, then hits the database — only to discover that the round stored in the database (committed in `accountUpdates.commitRound`) is out of sync with `accountUpdates.cachedDBRound` (updated a little bit later in `accountUpdates.postCommit`). In this case, the lookup method waits and tries again, iterating the `for { }` it is in. We did not have coverage for this code path before. ## Test Plan Adds new test. * Test for catchup stop on completion (#3306) Adding a test for the fix in #3299 ## Test Plan This is a test * Delete unused AtomicCommitWriteLock(). (#3383) ## Summary This PR deletes unused `AtomicCommitWriteLock()` and simplifies code. ## Test Plan None. 
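As a rough illustration of the REST connection limits described in #3326 above (429 responses once the soft limit is exceeded; connections accepted and immediately closed once the hard limit is exceeded, with #3380 applying a similar rejecting listener to WebsocketNetwork): the sketch below is generic Go with made-up thresholds and handler names, not algod's actual server wiring.

```go
package main

import (
	"net/http"
	"sync/atomic"
)

// softLimit returns 429 Too Many Requests once more than `limit` requests are
// in flight. The hard limit would be enforced one layer down, at the listener:
// accept the TCP connection and close it right away so the dialer fails fast.
func softLimit(limit int64, next http.Handler) http.Handler {
	var inFlight int64
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		n := atomic.AddInt64(&inFlight, 1)
		defer atomic.AddInt64(&inFlight, -1)
		if n > limit {
			http.Error(w, "too many concurrent requests", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	ok := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) })
	http.ListenAndServe(":8080", softLimit(16, ok)) // illustrative port and threshold
}
```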
Co-authored-by: DevOps Service <devops-service@algorand.com> Co-authored-by: Jason Paulos <jasonpaulos@users.noreply.github.com> Co-authored-by: chris erway <51567+cce@users.noreply.github.com> Co-authored-by: Shant Karakashian <55754073+algonautshant@users.noreply.github.com> Co-authored-by: John Lee <64482439+algojohnlee@users.noreply.github.com> Co-authored-by: Will Winder <wwinder.unh@gmail.com> Co-authored-by: Ben Guidarelli <ben.guidarelli@gmail.com> Co-authored-by: Pavel Zbitskiy <65323360+algorandskiy@users.noreply.github.com> Co-authored-by: Jack <87339414+algojack@users.noreply.github.com> Co-authored-by: John Lee <john.lee@algorand.com> Co-authored-by: algobarb <78746954+algobarb@users.noreply.github.com> Co-authored-by: Zeph Grunschlag <tzaffi@users.noreply.github.com> Co-authored-by: Fabrice Benhamouda <fabrice.benhamouda@normalesup.org> Co-authored-by: Tsachi Herman <tsachi.herman@algorand.com> Co-authored-by: Tolik Zinovyev <tolik@algorand.com> Co-authored-by: egieseke <eric_gieseke@yahoo.com> Co-authored-by: Quentin Kniep <kniepque@hu-berlin.de> Co-authored-by: Hang Su <87964331+ahangsu@users.noreply.github.com> Co-authored-by: Or Aharonee <or.aharonee@algorand.com> Co-authored-by: Barbara Poon <barbara.poon@algorand.com> * Feature/contract to contract (#3395) * Three new globals for to help contract-to-contract usability * detritis * Check error * doc comments * Impose limits on the entire "tree" of inner calls. This also increases the realism of testing of multiple app calls in a group by creating the EvalParams with the real constructor, thus getting the pooling stuff tested here without playing games manipulating the ep after construction. * Move appID tracking into EvalContext, out of LedgerForLogic This change increases the seperation between AVM execution and the ledger being used to lookup resources. Previously, the ledger kept track of the appID being executed, to offer a narrower interface to those resources. But now, with app-to-app calls, the appID being executed must change, and the AVM needs to maintain the current appID. * Stupid linter * Fix unit tests error messages * Allow access to resources created in the same transaction group The method will be reworked, but the tests are correct and want to get them visible to team. * Access to apps created in group Also adds some tests that are currently skipped for testing - access to addresses of newly created apps - use of gaid in inner transactions Both require some work to implement the thing being tested. * Remove tracked created mechanism in favor of examining applydata. * Allow v6 AVM code to use in-group created asas, apps (& their accts) One exception - apps can not mutate (put or del) keys from the app accounts, because EvalDelta cannot encode such changes. * lint docs * typo * The review dog needs obedience training. * Use one EvalParams for logic evals, another for apps in dry run We used to use one ep per transaction, shared between sig and and app. But the new model of ep usage is to keep using one while evaluating an entire group. The app ep is now built logic.NewAppEvalParams which, hopefully, will prevent some bugs when we change something in the EvalParams and don't reflect it in what was a "raw" EvalParams construction in debugger and dry run. * Use logic.NewAppEvalParams to decrease copying and bugs in debugger * Simplify use of NewEvalParams. No more nil return when no apps. 
This way, NewEvalParams can be used for all creations of EvalParams, whether they are intended for logicsig or app use, greatly simplifying the way we make them for use by dry run or debugger (where they serve double duty). * Remove explicit PastSideEffects handling in tealdbg * Always create EvalParams to evaluate a transaction group. We used to have an optimization to avoid creating EvalParams unless there was an app call in the transaction group. But the interface to allow transaction processing to communicate changes into the EvalParams is complicated by that (we must only do it if there is one!) This also allows us to use the same construction function for eps created for app and logic evaluation, simplifying dry-run and debugger. The optimization is less needed now anyway: 1) The ep is now shared for the whole group, so it's only one. 2) The ep is smaller now, as we only store nil pointers instead of larger scratch space objects for non-app calls. * Correct mistaken commit * Spec improvments * More spec improvments, including resource "availability" * Recursively return inner transaction tree * Lint * No need for ConfirmedRound, so don't deref a nil pointer! * license check * Shut up, dawg. * base64 merge cleanup * Feature/contract to contract (#3401) * Three new globals for to help contract-to-contract usability * detritis * Check error * doc comments * Impose limits on the entire "tree" of inner calls. This also increases the realism of testing of multiple app calls in a group by creating the EvalParams with the real constructor, thus getting the pooling stuff tested here without playing games manipulating the ep after construction. * Move appID tracking into EvalContext, out of LedgerForLogic This change increases the seperation between AVM execution and the ledger being used to lookup resources. Previously, the ledger kept track of the appID being executed, to offer a narrower interface to those resources. But now, with app-to-app calls, the appID being executed must change, and the AVM needs to maintain the current appID. * Stupid linter * Fix unit tests error messages * Allow access to resources created in the same transaction group The method will be reworked, but the tests are correct and want to get them visible to team. * Access to apps created in group Also adds some tests that are currently skipped for testing - access to addresses of newly created apps - use of gaid in inner transactions Both require some work to implement the thing being tested. * Remove tracked created mechanism in favor of examining applydata. * Allow v6 AVM code to use in-group created asas, apps (& their accts) One exception - apps can not mutate (put or del) keys from the app accounts, because EvalDelta cannot encode such changes. * lint docs * typo * The review dog needs obedience training. * Use one EvalParams for logic evals, another for apps in dry run We used to use one ep per transaction, shared between sig and and app. But the new model of ep usage is to keep using one while evaluating an entire group. The app ep is now built logic.NewAppEvalParams which, hopefully, will prevent some bugs when we change something in the EvalParams and don't reflect it in what was a "raw" EvalParams construction in debugger and dry run. * Use logic.NewAppEvalParams to decrease copying and bugs in debugger * Simplify use of NewEvalParams. No more nil return when no apps. 
This way, NewEvalParams can be used for all creations of EvalParams, whether they are intended for logicsig or app use, greatly simplifying the way we make them for use by dry run or debugger (where they serve double duty). * Remove explicit PastSideEffects handling in tealdbg * Always create EvalParams to evaluate a transaction group. We used to have an optimization to avoid creating EvalParams unless there was an app call in the transaction group. But the interface to allow transaction processing to communicate changes into the EvalParams is complicated by that (we must only do it if there is one!) This also allows us to use the same construction function for eps created for app and logic evaluation, simplifying dry-run and debugger. The optimization is less needed now anyway: 1) The ep is now shared for the whole group, so it's only one. 2) The ep is smaller now, as we only store nil pointers instead of larger scratch space objects for non-app calls. * Correct mistaken commit * Spec improvments * More spec improvments, including resource "availability" * Recursively return inner transaction tree * Lint * No need for ConfirmedRound, so don't deref a nil pointer! * license check * Shut up, dawg. * base64 merge cleanup * Feature/contract to contract (#3402) * Three new globals for to help contract-to-contract usability * detritis * Check error * doc comments * Impose limits on the entire "tree" of inner calls. This also increases the realism of testing of multiple app calls in a group by creating the EvalParams with the real constructor, thus getting the pooling stuff tested here without playing games manipulating the ep after construction. * Move appID tracking into EvalContext, out of LedgerForLogic This change increases the seperation between AVM execution and the ledger being used to lookup resources. Previously, the ledger kept track of the appID being executed, to offer a narrower interface to those resources. But now, with app-to-app calls, the appID being executed must change, and the AVM needs to maintain the current appID. * Stupid linter * Fix unit tests error messages * Allow access to resources created in the same transaction group The method will be reworked, but the tests are correct and want to get them visible to team. * Access to apps created in group Also adds some tests that are currently skipped for testing - access to addresses of newly created apps - use of gaid in inner transactions Both require some work to implement the thing being tested. * Remove tracked created mechanism in favor of examining applydata. * Allow v6 AVM code to use in-group created asas, apps (& their accts) One exception - apps can not mutate (put or del) keys from the app accounts, because EvalDelta cannot encode such changes. * lint docs * typo * The review dog needs obedience training. * Use one EvalParams for logic evals, another for apps in dry run We used to use one ep per transaction, shared between sig and and app. But the new model of ep usage is to keep using one while evaluating an entire group. The app ep is now built logic.NewAppEvalParams which, hopefully, will prevent some bugs when we change something in the EvalParams and don't reflect it in what was a "raw" EvalParams construction in debugger and dry run. * Use logic.NewAppEvalParams to decrease copying and bugs in debugger * Simplify use of NewEvalParams. No more nil return when no apps. 
This way, NewEvalParams can be used for all creations of EvalParams, whether they are intended for logicsig or app use, greatly simplifying the way we make them for use by dry run or debugger (where they serve double duty). * Remove explicit PastSideEffects handling in tealdbg * Always create EvalParams to evaluate a transaction group. We used to have an optimization to avoid creating EvalParams unless there was an app call in the transaction group. But the interface to allow transaction processing to communicate changes into the EvalParams is complicated by that (we must only do it if there is one!) This also allows us to use the same construction function for eps created for app and logic evaluation, simplifying dry-run and debugger. The optimization is less needed now anyway: 1) The ep is now shared for the whole group, so it's only one. 2) The ep is smaller now, as we only store nil pointers instead of larger scratch space objects for non-app calls. * Correct mistaken commit * Spec improvments * More spec improvments, including resource "availability" * Recursively return inner transaction tree * Lint * No need for ConfirmedRound, so don't deref a nil pointer! * license check * Shut up, dawg. * base64 merge cleanup * Feature/contract to contract update (#3412) * testing: Fix unit test TestAsyncTelemetryHook_QueueDepth (#2685) Fix the unit test TestAsyncTelemetryHook_QueueDepth * Deprecate `FastPartitionRecovery` from `ConsensusParams` (#3386) ## Summary This PR removes `FastPartitionRecovery` option from consensus parameters. The code now acts as if this value is set to true. Closes https://github.com/algorand/go-algorand-internal/issues/1830. ## Test Plan None. * Remaking a PR for CI (#3398) * Allow setting manager, reserve, freeze, and clawback at goal asset create * Add e2e tests * Add more tests for goal asset create flags Co-authored-by: Fionna <fionnacst@gmail.com> * [Other] CircleCI pipeline change for binary uploads (#3381) For nightly builds ("rel/nightly"), we want to have deadlock enabled. For rel/beta and rel/stable, we want to make sure we can build and upload a binary with deadlock disabled so that it can be used for release testing and validation purposes. * signer.KeyDilution need not depend on config package (#3265) crypto package need not depend on config. There is an unnecessary dependency on config. signer.KeyDilution takes the `config.ConsensusParams` as argument to pick the DefaultKeyDilution from it. This introduces dependency from the crypto package to config package. Instead, only the DefaultKeyDilution value can be passed to signer.KeyDilution. * algodump is a tcpdump-like tool for algod's network protocol (#3166) This PR introduces algodump, a tcpdump-like tool for monitoring algod network messages. 
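A minimal sketch of the signer.KeyDilution decoupling described in #3265 above, assuming an illustrative stand-in type rather than the real crypto package: the method takes just the default dilution value, so the crypto package no longer needs to import config to read ConsensusParams.

```go
package main

import "fmt"

// Illustrative stand-in for the participation signer; the "0 means fall back
// to the supplied default" convention is assumed for the example.
type signer struct {
	keyDilution uint64
}

// KeyDilution receives only the value it needs instead of config.ConsensusParams.
func (s signer) KeyDilution(defaultKeyDilution uint64) uint64 {
	if s.keyDilution != 0 {
		return s.keyDilution
	}
	return defaultKeyDilution
}

func main() {
	fmt.Println(signer{}.KeyDilution(10000))                 // uses the default
	fmt.Println(signer{keyDilution: 777}.KeyDilution(10000)) // explicit value wins
}
```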
* Removing C/crypto dependencies from `data/abi` package (#3375) * Feature Networks pipeline related changes (#3393) Added support for not having certain files in signing script Co-authored-by: Tsachi Herman <tsachi.herman@algorand.com> Co-authored-by: Tolik Zinovyev <tolik@algorand.com> Co-authored-by: Jack <87339414+algojack@users.noreply.github.com> Co-authored-by: Fionna <fionnacst@gmail.com> Co-authored-by: algobarb <78746954+algobarb@users.noreply.github.com> Co-authored-by: Shant Karakashian <55754073+algonautshant@users.noreply.github.com> Co-authored-by: Nickolai Zeldovich <nickolai@csail.mit.edu> Co-authored-by: Hang Su <87964331+ahangsu@users.noreply.github.com> * c2c: bsqrt, acct_params_get (#3404) also an e2e inner appl test * Feature/contract to contract (#3418) * Integration tests for C2C * added stress test * undid incorrect autoformat * remove autogenerated catchpoints * Improve benchmarking * Allow 256 inners (#3422) * Three new globals for to help contract-to-contract usability * detritis * Check error * doc comments * Impose limits on the entire "tree" of inner calls. This also increases the realism of testing of multiple app calls in a group by creating the EvalParams with the real constructor, thus getting the pooling stuff tested here without playing games manipulating the ep after construction. * Move appID tracking into EvalContext, out of LedgerForLogic This change increases the seperation between AVM execution and the ledger being used to lookup resources. Previously, the ledger kept track of the appID being executed, to offer a narrower interface to those resources. But now, with app-to-app calls, the appID being executed must change, and the AVM needs to maintain the current appID. * Stupid linter * Fix unit tests error messages * Allow access to resources created in the same transaction group The method will be reworked, but the tests are correct and want to get them visible to team. * Access to apps created in group Also adds some tests that are currently skipped for testing - access to addresses of newly created apps - use of gaid in inner transactions Both require some work to implement the thing being tested. * Remove tracked created mechanism in favor of examining applydata. * Allow v6 AVM code to use in-group created asas, apps (& their accts) One exception - apps can not mutate (put or del) keys from the app accounts, because EvalDelta cannot encode such changes. * lint docs * typo * The review dog needs obedience training. * Use one EvalParams for logic evals, another for apps in dry run We used to use one ep per transaction, shared between sig and and app. But the new model of ep usage is to keep using one while evaluating an entire group. The app ep is now built logic.NewAppEvalParams which, hopefully, will prevent some bugs when we change something in the EvalParams and don't reflect it in what was a "raw" EvalParams construction in debugger and dry run. * Use logic.NewAppEvalParams to decrease copying and bugs in debugger * Simplify use of NewEvalParams. No more nil return when no apps. This way, NewEvalParams can be used for all creations of EvalParams, whether they are intended for logicsig or app use, greatly simplifying the way we make them for use by dry run or debugger (where they serve double duty). * Remove explicit PastSideEffects handling in tealdbg * Always create EvalParams to evaluate a transaction group. We used to have an optimization to avoid creating EvalParams unless there was an app call in the transaction group. 
But the interface to allow transaction processing to communicate changes into the EvalParams is complicated by that (we must only do it if there is one!) This also allows us to use the same construction function for eps created for app and logic evaluation, simplifying dry-run and debugger. The optimization is less needed now anyway: 1) The ep is now shared for the whole group, so it's only one. 2) The ep is smaller now, as we only store nil pointers instead of larger scratch space objects for non-app calls. * Correct mistaken commit * Spec improvments * More spec improvments, including resource "availability" * Recursively return inner transaction tree * Lint * No need for ConfirmedRound, so don't deref a nil pointer! * license check * Shut up, dawg. * base64 merge cleanup * Remove the extraneous field type arrays. * bsqrt * acct_holding_get, a unified opcode for account field access * Thanks, dawg * CR and more spec simplification * e2e test for inner transaction appls * Give max group size * 16 inner txns, regardless of apps present * Adjust test for allowing 256 inners * merge before audit (#3431) * Three new globals for to help contract-to-contract usability * detritis * Check error * doc comments * Impose limits on the entire "tree" of inner calls. This also increases the realism of testing of multiple app calls in a group by creating the EvalParams with the real constructor, thus getting the pooling stuff tested here without playing games manipulating the ep after construction. * Move appID tracking into EvalContext, out of LedgerForLogic This change increases the seperation between AVM execution and the ledger being used to lookup resources. Previously, the ledger kept track of the appID being executed, to offer a narrower interface to those resources. But now, with app-to-app calls, the appID being executed must change, and the AVM needs to maintain the current appID. * Stupid linter * Fix unit tests error messages * Allow access to resources created in the same transaction group The method will be reworked, but the tests are correct and want to get them visible to team. * Access to apps created in group Also adds some tests that are currently skipped for testing - access to addresses of newly created apps - use of gaid in inner transactions Both require some work to implement the thing being tested. * Remove tracked created mechanism in favor of examining applydata. * Allow v6 AVM code to use in-group created asas, apps (& their accts) One exception - apps can not mutate (put or del) keys from the app accounts, because EvalDelta cannot encode such changes. * lint docs * typo * The review dog needs obedience training. * Use one EvalParams for logic evals, another for apps in dry run We used to use one ep per transaction, shared between sig and and app. But the new model of ep usage is to keep using one while evaluating an entire group. The app ep is now built logic.NewAppEvalParams which, hopefully, will prevent some bugs when we change something in the EvalParams and don't reflect it in what was a "raw" EvalParams construction in debugger and dry run. * Use logic.NewAppEvalParams to decrease copying and bugs in debugger * Simplify use of NewEvalParams. No more nil return when no apps. This way, NewEvalParams can be used for all creations of EvalParams, whether they are intended for logicsig or app use, greatly simplifying the way we make them for use by dry run or debugger (where they serve double duty). 
* Remove explicit PastSideEffects handling in tealdbg * Always create EvalParams to evaluate a transaction group. We used to have an optimization to avoid creating EvalParams unless there was an app call in the transaction group. But the interface to allow transaction processing to communicate changes into the EvalParams is complicated by that (we must only do it if there is one!) This also allows us to use the same construction function for eps created for app and logic evaluation, simplifying dry-run and debugger. The optimization is less needed now anyway: 1) The ep is now shared for the whole group, so it's only one. 2) The ep is smaller now, as we only store nil pointers instead of larger scratch space objects for non-app calls. * Correct mistaken commit * Spec improvments * More spec improvments, including resource "availability" * Recursively return inner transaction tree * Lint * No need for ConfirmedRound, so don't deref a nil pointer! * license check * Shut up, dawg. * testing: Fix unit test TestAsyncTelemetryHook_QueueDepth (#2685) Fix the unit test TestAsyncTelemetryHook_QueueDepth * Deprecate `FastPartitionRecovery` from `ConsensusParams` (#3386) ## Summary This PR removes `FastPartitionRecovery` option from consensus parameters. The code now acts as if this value is set to true. Closes https://github.com/algorand/go-algorand-internal/issues/1830. ## Test Plan None. * base64 merge cleanup * Remaking a PR for CI (#3398) * Allow setting manager, reserve, freeze, and clawback at goal asset create * Add e2e tests * Add more tests for goal asset create flags Co-authored-by: Fionna <fionnacst@gmail.com> * Remove the extraneous field type arrays. * bsqrt * acct_holding_get, a unified opcode for account field access * Thanks, dawg * [Other] CircleCI pipeline change for binary uploads (#3381) For nightly builds ("rel/nightly"), we want to have deadlock enabled. For rel/beta and rel/stable, we want to make sure we can build and upload a binary with deadlock disabled so that it can be used for release testing and validation purposes. * signer.KeyDilution need not depend on config package (#3265) crypto package need not depend on config. There is an unnecessary dependency on config. signer.KeyDilution takes the `config.ConsensusParams` as argument to pick the DefaultKeyDilution from it. This introduces dependency from the crypto package to config package. Instead, only the DefaultKeyDilution value can be passed to signer.KeyDilution. * CR and more spec simplification * algodump is a tcpdump-like tool for algod's network protocol (#3166) This PR introduces algodump, a tcpdump-like tool for monitoring algod network messages. * Removing C/crypto dependencies from `data/abi` package (#3375) * Feature Networks pipeline related changes (#3393) Added support for not having certain files in signing script * e2e test for inner transaction appls * testing: Add slightly more coverage to TestAcctUpdatesLookupRetry (#3384) Add slightly more coverage to TestAcctUpdatesLookupRetry * add context to (most) agreement logged writes (#3411) Current agreement code only writes a `context : agreement` to a subset of the logged messages. This change extends the said entry, which would make it easier to pre-process logs entries by their corresponding component. The change in this PR is focused on: 1. make sure that the "root" agreement logger always injects the `context : agreement` argument. 2. 
change the various locations in the agreement code to use the root agreement logger instead of referring to the application-global instance (`logging.Base()`). * network: faster node shutdown (#3416) During the node shutdown, all the current outgoing connections are being disconnected. Since these connections are web sockets, they require a close connection message to be sent. However, sending this message can take a while, and in situations where the other party has already shut down, we might never get a response. That, in turn, would lead the node waiting until the deadline is reached. The current deadline was 5 seconds. This PR changes the deadline during shutdown to be 50ms. * Give max group size * 16 inner txns, regardless of apps present * Adjust test for allowing 256 inners Co-authored-by: Tsachi Herman <tsachi.herman@algorand.com> Co-authored-by: Tolik Zinovyev <tolik@algorand.com> Co-authored-by: Jack <87339414+algojack@users.noreply.github.com> Co-authored-by: Fionna <fionnacst@gmail.com> Co-authored-by: algobarb <78746954+algobarb@users.noreply.github.com> Co-authored-by: Shant Karakashian <55754073+algonautshant@users.noreply.github.com> Co-authored-by: Nickolai Zeldovich <nickolai@csail.mit.edu> Co-authored-by: Hang Su <87964331+ahangsu@users.noreply.github.com> Co-authored-by: chris erway <51567+cce@users.noreply.github.com> Co-authored-by: DevOps Service <devops-service@algorand.com> Co-authored-by: Jason Paulos <jasonpaulos@users.noreply.github.com> Co-authored-by: chris erway <51567+cce@users.noreply.github.com> Co-authored-by: Shant Karakashian <55754073+algonautshant@users.noreply.github.com> Co-authored-by: John Lee <64482439+algojohnlee@users.noreply.github.com> Co-authored-by: Will Winder <wwinder.unh@gmail.com> Co-authored-by: Ben Guidarelli <ben.guidarelli@gmail.com> Co-authored-by: Pavel Zbitskiy <65323360+algorandskiy@users.noreply.github.com> Co-authored-by: Jack <87339414+algojack@users.noreply.github.com> Co-authored-by: John Lee <john.lee@algorand.com> Co-authored-by: algobarb <78746954+algobarb@users.noreply.github.com> Co-authored-by: Zeph Grunschlag <tzaffi@users.noreply.github.com> Co-authored-by: Fabrice Benhamouda <fabrice.benhamouda@normalesup.org> Co-authored-by: Tsachi Herman <tsachi.herman@algorand.com> Co-authored-by: Tolik Zinovyev <tolik@algorand.com> Co-authored-by: egieseke <eric_gieseke@yahoo.com> Co-authored-by: Quentin Kniep <kniepque@hu-berlin.de> Co-authored-by: Hang Su <87964331+ahangsu@users.noreply.github.com> Co-authored-by: Or Aharonee <or.aharonee@algorand.com> Co-authored-by: Barbara Poon <barbara.poon@algorand.com> Co-authored-by: Brice Rising <60147418+bricerisingalgorand@users.noreply.github.com> Co-authored-by: Fionna <fionnacst@gmail.com> Co-authored-by: Nickolai Zeldovich <nickolai@csail.mit.edu> Co-authored-by: algoidurovic <91566643+algoidurovic@users.noreply.github.com>
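One more illustration before the diffstat: the `bsqrt` opcode added alongside `acct_params_get` above works on byteslice math values; conceptually it is the floor square root of the operand read as a big-endian unsigned integer. The Go sketch below shows only that arithmetic, not the AVM's implementation or cost accounting.

```go
package main

import (
	"fmt"
	"math/big"
)

// bytesliceSqrt interprets b as a big-endian unsigned integer and returns the
// big-endian bytes of the floor of its square root.
func bytesliceSqrt(b []byte) []byte {
	x := new(big.Int).SetBytes(b)
	return new(big.Int).Sqrt(x).Bytes()
}

func main() {
	fmt.Printf("%x\n", bytesliceSqrt([]byte{0x10, 0x00})) // sqrt(0x1000 = 4096) = 64 = 0x40
}
```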
-rw-r--r--  THANKS.md  1
-rw-r--r--  cmd/goal/clerk.go  32
-rw-r--r--  cmd/opdoc/opdoc.go  226
-rw-r--r--  cmd/tealdbg/cdtSession_test.go  4
-rw-r--r--  cmd/tealdbg/cdtState.go  8
-rw-r--r--  cmd/tealdbg/debugger_test.go  10
-rw-r--r--  cmd/tealdbg/local.go  149
-rw-r--r--  cmd/tealdbg/localLedger_test.go  18
-rw-r--r--  cmd/tealdbg/local_test.go  11
-rw-r--r--  cmd/tealdbg/server.go  2
-rw-r--r--  cmd/tealdbg/server_test.go  2
-rw-r--r--  config/consensus.go  4
-rw-r--r--  daemon/algod/api/server/v2/dryrun.go  19
-rw-r--r--  daemon/algod/api/server/v2/utils.go  11
-rw-r--r--  data/transactions/logic/README.md  763
-rw-r--r--  data/transactions/logic/README_in.md  299
-rw-r--r--  data/transactions/logic/TEAL_opcodes.md  1110
-rw-r--r--  data/transactions/logic/assembler.go  383
-rw-r--r--  data/transactions/logic/assembler_test.go  182
-rw-r--r--  data/transactions/logic/backwardCompat_test.go  161
-rw-r--r--  data/transactions/logic/blackbox_test.go  95
-rw-r--r--  data/transactions/logic/debugger.go  36
-rw-r--r--  data/transactions/logic/debugger_test.go  38
-rw-r--r--  data/transactions/logic/doc.go  379
-rw-r--r--  data/transactions/logic/doc_test.go  34
-rw-r--r--  data/transactions/logic/eval.go  1564
-rw-r--r--  data/transactions/logic/evalAppTxn_test.go  1413
-rw-r--r--  data/transactions/logic/evalCrypto_test.go  40
-rw-r--r--  data/transactions/logic/evalStateful_test.go  864
-rw-r--r--  data/transactions/logic/eval_test.go  1713
-rw-r--r--  data/transactions/logic/export_test.go  44
-rw-r--r--  data/transactions/logic/fields.go  526
-rw-r--r--  data/transactions/logic/fields_string.go  31
-rw-r--r--  data/transactions/logic/fields_test.go  113
-rw-r--r--  data/transactions/logic/ledger_test.go (renamed from data/transactions/logictest/ledger.go)  259
-rw-r--r--  data/transactions/logic/opcodes.go  28
-rw-r--r--  data/transactions/signedtxn.go  2
-rw-r--r--  data/transactions/transaction.go  18
-rw-r--r--  data/transactions/transaction_test.go  6
-rw-r--r--  data/transactions/verify/txn.go  15
-rw-r--r--  data/transactions/verify/verifiedTxnCache.go  2
-rw-r--r--  ledger/acctupdates.go  5
-rw-r--r--  ledger/apply/application.go  16
-rw-r--r--  ledger/apply/application_test.go  68
-rw-r--r--  ledger/apply/apply.go  2
-rw-r--r--  ledger/apply/asset.go  7
-rw-r--r--  ledger/apply/keyreg_test.go  2
-rw-r--r--  ledger/apply/mockBalances_test.go  2
-rw-r--r--  ledger/apptxn_test.go  1339
-rw-r--r--  ledger/evalindexer.go  11
-rw-r--r--  ledger/evalindexer_test.go  22
-rw-r--r--  ledger/internal/appcow.go  37
-rw-r--r--  ledger/internal/applications.go  120
-rw-r--r--  ledger/internal/applications_test.go  64
-rw-r--r--  ledger/internal/apptxn_test.go  2441
-rw-r--r--  ledger/internal/assetcow.go  2
-rw-r--r--  ledger/internal/cow.go  37
-rw-r--r--  ledger/internal/eval.go  92
-rw-r--r--  ledger/internal/eval_blackbox_test.go  36
-rw-r--r--  ledger/internal/eval_test.go  211
-rw-r--r--  ledger/ledger_test.go  4
-rw-r--r--  ledger/testing/testGenesis.go  1
-rwxr-xr-x [-rw-r--r--]  test/e2e-go/cli/goal/expect/statefulTealCreateAppTest.exp  2
-rwxr-xr-x  test/scripts/e2e_subs/app-inner-calls.py  149
-rwxr-xr-x  test/scripts/e2e_subs/goal/goal.py  21
65 files changed, 8940 insertions, 6366 deletions
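One note before the raw diff: the cmd/goal/clerk.go and tealdbg hunks below replace per-transaction `logic.EvalParams` construction with a single params object for the whole group, evaluated per transaction by index (`CheckSignature(i, ep)` / `EvalSignature(i, ep)`). The toy Go sketch below models only that calling shape, with invented miniature types; it is not the go-algorand API.

```go
package main

import (
	"fmt"
	"strings"
)

// Invented miniature types modelling "one shared params value per group".
type evalParams struct {
	txnGroup []string // stands in for the group's signed transactions
	trace    *strings.Builder
}

func newEvalParams(group []string) *evalParams {
	return &evalParams{txnGroup: group, trace: &strings.Builder{}}
}

// evalSignature evaluates one transaction of the group, selected by index,
// against the shared params, mirroring the calling pattern in the diff below.
func evalSignature(gi int, ep *evalParams) bool {
	fmt.Fprintf(ep.trace, "tx[%d] %s: pass\n", gi, ep.txnGroup[gi])
	return true
}

func main() {
	ep := newEvalParams([]string{"pay", "axfer", "appl"})
	for i := range ep.txnGroup {
		evalSignature(i, ep) // one shared ep, indexed per transaction
	}
	fmt.Print(ep.trace.String())
}
```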
diff --git a/THANKS.md b/THANKS.md
index f0eb65981..0585b9338 100644
--- a/THANKS.md
+++ b/THANKS.md
@@ -5,6 +5,7 @@ A big thank you to everyone who has contributed to the `go-algorand` codebase.
### External Contributors
- aybehrouz
+- fionnachan
- jeapostrophe
- jecassis
- jsign
diff --git a/cmd/goal/clerk.go b/cmd/goal/clerk.go
index 87d6ef27f..5359c372f 100644
--- a/cmd/goal/clerk.go
+++ b/cmd/goal/clerk.go
@@ -321,7 +321,7 @@ var sendCmd = &cobra.Command{
var err error
if progByteFile != "" {
if programSource != "" || logicSigFile != "" {
- reportErrorln("should at most one of --from-program/-F or --from-program-bytes/-P --logic-sig/-L")
+ reportErrorln("should use at most one of --from-program/-F or --from-program-bytes/-P --logic-sig/-L")
}
program, err = readFile(progByteFile)
if err != nil {
@@ -329,7 +329,7 @@ var sendCmd = &cobra.Command{
}
} else if programSource != "" {
if logicSigFile != "" {
- reportErrorln("should at most one of --from-program/-F or --from-program-bytes/-P --logic-sig/-L")
+ reportErrorln("should use at most one of --from-program/-F or --from-program-bytes/-P --logic-sig/-L")
}
program = assembleFile(programSource)
} else if logicSigFile != "" {
@@ -788,6 +788,9 @@ var signCmd = &cobra.Command{
for _, group := range groupsOrder {
txnGroup := []transactions.SignedTxn{}
for _, txn := range txnGroups[group] {
+ if lsig.Logic != nil {
+ txn.Lsig = lsig
+ }
txnGroup = append(txnGroup, *txn)
}
var groupCtx *verify.GroupContext
@@ -801,7 +804,6 @@ var signCmd = &cobra.Command{
for i, txn := range txnGroup {
var signedTxn transactions.SignedTxn
if lsig.Logic != nil {
- txn.Lsig = lsig
err = verify.LogicSigSanityCheck(&txn, i, groupCtx)
if err != nil {
reportErrorf("%s: txn[%d] error %s", txFilename, txnIndex[txnGroups[group][i]], err)
@@ -1065,10 +1067,7 @@ var dryrunCmd = &cobra.Command{
}
stxns = append(stxns, txn)
}
- txgroup := make([]transactions.SignedTxn, len(stxns))
- for i, st := range stxns {
- txgroup[i] = st
- }
+ txgroup := transactions.WrapSignedTxnsWithAD(stxns)
proto, params := getProto(protoVersion)
if dumpForDryrun {
// Write dryrun data to file
@@ -1078,7 +1077,7 @@ var dryrunCmd = &cobra.Command{
if err != nil {
reportErrorf(err.Error())
}
- data, err := libgoal.MakeDryrunStateBytes(client, nil, txgroup, accts, string(proto), dumpForDryrunFormat.String())
+ data, err := libgoal.MakeDryrunStateBytes(client, nil, stxns, accts, string(proto), dumpForDryrunFormat.String())
if err != nil {
reportErrorf(err.Error())
}
@@ -1096,22 +1095,15 @@ var dryrunCmd = &cobra.Command{
if uint64(txn.Lsig.Len()) > params.LogicSigMaxSize {
reportErrorf("program size too large: %d > %d", len(txn.Lsig.Logic), params.LogicSigMaxSize)
}
- ep := logic.EvalParams{Txn: &txn, Proto: &params, GroupIndex: uint64(i), TxnGroup: txgroup}
- err := logic.Check(txn.Lsig.Logic, ep)
+ ep := logic.NewEvalParams(txgroup, &params, nil)
+ err := logic.CheckSignature(i, ep)
if err != nil {
reportErrorf("program failed Check: %s", err)
}
- sb := strings.Builder{}
- ep = logic.EvalParams{
- Txn: &txn,
- GroupIndex: uint64(i),
- Proto: &params,
- Trace: &sb,
- TxnGroup: txgroup,
- }
- pass, err := logic.Eval(txn.Lsig.Logic, ep)
+ ep.Trace = &strings.Builder{}
+ pass, err := logic.EvalSignature(i, ep)
// TODO: optionally include `inspect` output here?
- fmt.Fprintf(os.Stdout, "tx[%d] trace:\n%s\n", i, sb.String())
+ fmt.Fprintf(os.Stdout, "tx[%d] trace:\n%s\n", i, ep.Trace.String())
if pass {
fmt.Fprintf(os.Stdout, " - pass -\n")
} else {
diff --git a/cmd/opdoc/opdoc.go b/cmd/opdoc/opdoc.go
index be9b69956..c98a1a0f7 100644
--- a/cmd/opdoc/opdoc.go
+++ b/cmd/opdoc/opdoc.go
@@ -29,8 +29,8 @@ import (
)
func opGroupMarkdownTable(names []string, out io.Writer) {
- fmt.Fprint(out, `| Op | Description |
-| --- | --- |
+ fmt.Fprint(out, `| Opcode | Description |
+| - | -- |
`)
opSpecs := logic.OpsByName[logic.LogicVersion]
// TODO: sort by logic.OpSpecs[].Opcode
@@ -58,69 +58,98 @@ func typeEnumTableMarkdown(out io.Writer) {
func integerConstantsTableMarkdown(out io.Writer) {
fmt.Fprintf(out, "#### OnComplete\n\n")
fmt.Fprintf(out, "%s\n\n", logic.OnCompletionPreamble)
- fmt.Fprintf(out, "| Value | Constant name | Description |\n")
- fmt.Fprintf(out, "| --- | --- | --- |\n")
+ fmt.Fprintf(out, "| Value | Name | Description |\n")
+ fmt.Fprintf(out, "| - | ---- | -------- |\n")
for i, name := range logic.OnCompletionNames {
value := uint64(i)
fmt.Fprintf(out, "| %d | %s | %s |\n", value, markdownTableEscape(name), logic.OnCompletionDescription(value))
}
fmt.Fprintf(out, "\n")
- fmt.Fprintf(out, "#### TypeEnum constants\n")
- fmt.Fprintf(out, "| Value | Constant name | Description |\n")
- fmt.Fprintf(out, "| --- | --- | --- |\n")
+ fmt.Fprintf(out, "#### TypeEnum constants\n\n")
+ fmt.Fprintf(out, "| Value | Name | Description |\n")
+ fmt.Fprintf(out, "| - | --- | ------ |\n")
for i, name := range logic.TxnTypeNames {
fmt.Fprintf(out, "| %d | %s | %s |\n", i, markdownTableEscape(name), logic.TypeNameDescriptions[name])
}
out.Write([]byte("\n"))
}
-func fieldTableMarkdown(out io.Writer, names []string, types []logic.StackType, extra map[string]string) {
- if types != nil {
- fmt.Fprintf(out, "| Index | Name | Type | Notes |\n")
- fmt.Fprintf(out, "| --- | --- | --- | --- |\n")
- } else {
- fmt.Fprintf(out, "| Index | Name | Notes |\n")
- fmt.Fprintf(out, "| --- | --- | --- |\n")
+type speccer interface {
+ SpecByName(name string) logic.FieldSpec
+}
+
+func fieldSpecsMarkdown(out io.Writer, names []string, specs speccer) {
+ showTypes := false
+ showVers := false
+ spec0 := specs.SpecByName(names[0])
+ opVer := spec0.OpVersion()
+ for _, name := range names {
+ if specs.SpecByName(name).Type() != logic.StackNone {
+ showTypes = true
+ }
+ if specs.SpecByName(name).Version() != opVer {
+ showVers = true
+ }
+ }
+ headers := "| Index | Name |"
+ widths := "| - | ------ |"
+ if showTypes {
+ headers += " Type |"
+ widths += " -- |"
+ }
+ if showVers {
+ headers += " In |"
+ widths += " - |"
}
+ headers += " Notes |\n"
+ widths += " --------- |\n"
+ fmt.Fprint(out, headers, widths)
for i, name := range names {
+ spec := specs.SpecByName(name)
str := fmt.Sprintf("| %d | %s", i, markdownTableEscape(name))
- if types != nil {
- gfType := types[i]
- str = fmt.Sprintf("%s | %s", str, markdownTableEscape(gfType.String()))
+ if showTypes {
+ str = fmt.Sprintf("%s | %s", str, markdownTableEscape(spec.Type().String()))
+ }
+ if showVers {
+ if spec.Version() == spec.OpVersion() {
+ str = fmt.Sprintf("%s | ", str)
+ } else {
+ str = fmt.Sprintf("%s | v%d ", str, spec.Version())
+ }
}
- fmt.Fprintf(out, "%s | %s |\n", str, extra[name])
+ fmt.Fprintf(out, "%s | %s |\n", str, spec.Note())
}
- out.Write([]byte("\n"))
+ fmt.Fprint(out, "\n")
}
func transactionFieldsMarkdown(out io.Writer) {
fmt.Fprintf(out, "\n`txn` Fields (see [transaction reference](https://developer.algorand.org/docs/reference/transactions/)):\n\n")
- fieldTableMarkdown(out, logic.TxnFieldNames, logic.TxnFieldTypes, logic.TxnFieldDocs())
+ fieldSpecsMarkdown(out, logic.TxnFieldNames, logic.TxnFieldSpecByName)
}
func globalFieldsMarkdown(out io.Writer) {
fmt.Fprintf(out, "\n`global` Fields:\n\n")
- fieldTableMarkdown(out, logic.GlobalFieldNames, logic.GlobalFieldTypes, logic.GlobalFieldDocs())
+ fieldSpecsMarkdown(out, logic.GlobalFieldNames, logic.GlobalFieldSpecByName)
}
func assetHoldingFieldsMarkdown(out io.Writer) {
fmt.Fprintf(out, "\n`asset_holding_get` Fields:\n\n")
- fieldTableMarkdown(out, logic.AssetHoldingFieldNames, logic.AssetHoldingFieldTypes, logic.AssetHoldingFieldDocs)
+ fieldSpecsMarkdown(out, logic.AssetHoldingFieldNames, logic.AssetHoldingFieldSpecByName)
}
func assetParamsFieldsMarkdown(out io.Writer) {
fmt.Fprintf(out, "\n`asset_params_get` Fields:\n\n")
- fieldTableMarkdown(out, logic.AssetParamsFieldNames, logic.AssetParamsFieldTypes, logic.AssetParamsFieldDocs())
+ fieldSpecsMarkdown(out, logic.AssetParamsFieldNames, logic.AssetParamsFieldSpecByName)
}
func appParamsFieldsMarkdown(out io.Writer) {
fmt.Fprintf(out, "\n`app_params_get` Fields:\n\n")
- fieldTableMarkdown(out, logic.AppParamsFieldNames, logic.AppParamsFieldTypes, logic.AppParamsFieldDocs())
+ fieldSpecsMarkdown(out, logic.AppParamsFieldNames, logic.AppParamsFieldSpecByName)
}
func ecDsaCurvesMarkdown(out io.Writer) {
fmt.Fprintf(out, "\n`ECDSA` Curves:\n\n")
- fieldTableMarkdown(out, logic.EcdsaCurveNames, nil, logic.EcdsaCurveDocs)
+ fieldSpecsMarkdown(out, logic.EcdsaCurveNames, logic.EcdsaCurveSpecByName)
}
func immediateMarkdown(op *logic.OpSpec) string {
@@ -131,38 +160,45 @@ func immediateMarkdown(op *logic.OpSpec) string {
return markdown
}
-func opToMarkdown(out io.Writer, op *logic.OpSpec) (err error) {
- ws := ""
- opextra := logic.OpImmediateNote(op.Name)
- if opextra != "" {
- ws = " "
+func stackMarkdown(op *logic.OpSpec) string {
+ out := "- Stack: "
+ special := logic.OpStackEffects(op.Name)
+ if special != "" {
+ return out + special + "\n"
}
- fmt.Fprintf(out, "\n## %s%s\n\n- Opcode: 0x%02x%s%s\n", op.Name, immediateMarkdown(op), op.Opcode, ws, opextra)
- if op.Args == nil {
- fmt.Fprintf(out, "- Pops: _None_\n")
- } else if len(op.Args) == 1 {
- fmt.Fprintf(out, "- Pops: *... stack*, %s\n", op.Args[0].String())
- } else {
- fmt.Fprintf(out, "- Pops: *... stack*, {%s A}", op.Args[0].String())
- for i, v := range op.Args[1:] {
- fmt.Fprintf(out, ", {%s %c}", v.String(), rune(int('B')+i))
+
+ out += "..."
+ for i, v := range op.Args {
+ out += fmt.Sprintf(", %c", rune(int('A')+i))
+ if v.Typed() {
+ out += fmt.Sprintf(": %s", v)
}
- out.Write([]byte("\n"))
}
+ out += " &rarr; ..."
- if op.Returns == nil {
- fmt.Fprintf(out, "- Pushes: _None_\n")
- } else {
- if len(op.Returns) == 1 {
- fmt.Fprintf(out, "- Pushes: %s", op.Returns[0].String())
- } else {
- fmt.Fprintf(out, "- Pushes: *... stack*, %s", op.Returns[0].String())
- for _, rt := range op.Returns[1:] {
- fmt.Fprintf(out, ", %s", rt.String())
+ for i, rt := range op.Returns {
+ out += ", "
+ if len(op.Returns) > 1 {
+ start := int('X')
+ if len(op.Returns) > 3 {
+ start = int('Z') + 1 - len(op.Returns)
}
+ out += fmt.Sprintf("%c: ", rune(start+i))
}
- fmt.Fprintf(out, "\n")
+ out += rt.String()
}
+ return out + "\n"
+}
+
+func opToMarkdown(out io.Writer, op *logic.OpSpec) (err error) {
+ ws := ""
+ opextra := logic.OpImmediateNote(op.Name)
+ if opextra != "" {
+ ws = " "
+ }
+ stackEffects := stackMarkdown(op)
+ fmt.Fprintf(out, "\n## %s%s\n\n- Opcode: 0x%02x%s%s\n%s",
+ op.Name, immediateMarkdown(op), op.Opcode, ws, opextra, stackEffects)
fmt.Fprintf(out, "- %s\n", logic.OpDoc(op.Name))
// if cost changed with versions print all of them
costs := logic.OpAllCosts(op.Name)
@@ -170,12 +206,12 @@ func opToMarkdown(out io.Writer, op *logic.OpSpec) (err error) {
fmt.Fprintf(out, "- **Cost**:\n")
for _, cost := range costs {
if cost.From == cost.To {
- fmt.Fprintf(out, " - %d (LogicSigVersion = %d)\n", cost.Cost, cost.To)
+ fmt.Fprintf(out, " - %d (v%d)\n", cost.Cost, cost.To)
} else {
if cost.To < logic.LogicVersion {
- fmt.Fprintf(out, " - %d (%d <= LogicSigVersion <= %d)\n", cost.Cost, cost.From, cost.To)
+ fmt.Fprintf(out, " - %d (v%d - v%d)\n", cost.Cost, cost.From, cost.To)
} else {
- fmt.Fprintf(out, " - %d (LogicSigVersion >= %d)\n", cost.Cost, cost.From)
+ fmt.Fprintf(out, " - %d (since v%d)\n", cost.Cost, cost.From)
}
}
}
@@ -186,7 +222,7 @@ func opToMarkdown(out io.Writer, op *logic.OpSpec) (err error) {
}
}
if op.Version > 1 {
- fmt.Fprintf(out, "- LogicSigVersion >= %d\n", op.Version)
+ fmt.Fprintf(out, "- Availability: v%d\n", op.Version)
}
if !op.Modes.Any() {
fmt.Fprintf(out, "- Mode: %s\n", op.Modes.String())
@@ -250,28 +286,6 @@ type LanguageSpec struct {
Ops []OpRecord
}
-func argEnum(name string) []string {
- if name == "txn" || name == "gtxn" || name == "gtxns" {
- return logic.TxnFieldNames
- }
- if name == "global" {
- return logic.GlobalFieldNames
- }
- if name == "txna" || name == "gtxna" || name == "gtxnsa" || name == "txnas" || name == "gtxnas" || name == "gtxnsas" {
- return logic.TxnaFieldNames
- }
- if name == "asset_holding_get" {
- return logic.AssetHoldingFieldNames
- }
- if name == "asset_params_get" {
- return logic.AssetParamsFieldNames
- }
- if name == "app_params_get" {
- return logic.AppParamsFieldNames
- }
- return nil
-}
-
func typeString(types []logic.StackType) string {
out := make([]byte, len(types))
for i, t := range types {
@@ -294,27 +308,32 @@ func typeString(types []logic.StackType) string {
return string(out)
}
-func argEnumTypes(name string) string {
- if name == "txn" || name == "gtxn" || name == "gtxns" {
- return typeString(logic.TxnFieldTypes)
- }
- if name == "global" {
- return typeString(logic.GlobalFieldTypes)
- }
- if name == "txna" || name == "gtxna" || name == "gtxnsa" || name == "txnas" || name == "gtxnas" || name == "gtxnsas" {
- return typeString(logic.TxnaFieldTypes)
- }
- if name == "asset_holding_get" {
- return typeString(logic.AssetHoldingFieldTypes)
- }
- if name == "asset_params_get" {
- return typeString(logic.AssetParamsFieldTypes)
- }
- if name == "app_params_get" {
- return typeString(logic.AppParamsFieldTypes)
+func fieldsAndTypes(names []string, specs speccer) ([]string, string) {
+ types := make([]logic.StackType, len(names))
+ for i, name := range names {
+ types[i] = specs.SpecByName(name).Type()
}
+ return names, typeString(types)
+}
- return ""
+func argEnums(name string) (names []string, types string) {
+ switch name {
+ case "txn", "gtxn", "gtxns", "itxn", "gitxn", "itxn_field":
+ return fieldsAndTypes(logic.TxnFieldNames, logic.TxnFieldSpecByName)
+ case "global":
+ return
+ case "txna", "gtxna", "gtxnsa", "txnas", "gtxnas", "gtxnsas", "itxna", "gitxna":
+ // Map is the whole txn field spec map. That's fine, we only lookup the given names.
+ return fieldsAndTypes(logic.TxnaFieldNames(), logic.TxnFieldSpecByName)
+ case "asset_holding_get":
+ return fieldsAndTypes(logic.AssetHoldingFieldNames, logic.AssetHoldingFieldSpecByName)
+ case "asset_params_get":
+ return fieldsAndTypes(logic.AssetParamsFieldNames, logic.AssetParamsFieldSpecByName)
+ case "app_params_get":
+ return fieldsAndTypes(logic.AppParamsFieldNames, logic.AppParamsFieldSpecByName)
+ default:
+ return nil, ""
+ }
}
func buildLanguageSpec(opGroups map[string][]string) *LanguageSpec {
@@ -327,8 +346,7 @@ func buildLanguageSpec(opGroups map[string][]string) *LanguageSpec {
records[i].Returns = typeString(spec.Returns)
records[i].Cost = spec.Details.Cost
records[i].Size = spec.Details.Size
- records[i].ArgEnum = argEnum(spec.Name)
- records[i].ArgEnumTypes = argEnumTypes(spec.Name)
+ records[i].ArgEnum, records[i].ArgEnumTypes = argEnums(spec.Name)
records[i].Doc = logic.OpDoc(spec.Name)
records[i].DocExtra = logic.OpDocExtra(spec.Name)
records[i].ImmediateNote = logic.OpImmediateNote(spec.Name)
@@ -361,25 +379,29 @@ func main() {
constants.Close()
txnfields, _ := os.Create("txn_fields.md")
- fieldTableMarkdown(txnfields, logic.TxnFieldNames, logic.TxnFieldTypes, logic.TxnFieldDocs())
+ fieldSpecsMarkdown(txnfields, logic.TxnFieldNames, logic.TxnFieldSpecByName)
txnfields.Close()
globalfields, _ := os.Create("global_fields.md")
- fieldTableMarkdown(globalfields, logic.GlobalFieldNames, logic.GlobalFieldTypes, logic.GlobalFieldDocs())
+ fieldSpecsMarkdown(globalfields, logic.GlobalFieldNames, logic.GlobalFieldSpecByName)
globalfields.Close()
assetholding, _ := os.Create("asset_holding_fields.md")
- fieldTableMarkdown(assetholding, logic.AssetHoldingFieldNames, logic.AssetHoldingFieldTypes, logic.AssetHoldingFieldDocs)
+ fieldSpecsMarkdown(assetholding, logic.AssetHoldingFieldNames, logic.AssetHoldingFieldSpecByName)
assetholding.Close()
assetparams, _ := os.Create("asset_params_fields.md")
- fieldTableMarkdown(assetparams, logic.AssetParamsFieldNames, logic.AssetParamsFieldTypes, logic.AssetParamsFieldDocs())
+ fieldSpecsMarkdown(assetparams, logic.AssetParamsFieldNames, logic.AssetParamsFieldSpecByName)
assetparams.Close()
appparams, _ := os.Create("app_params_fields.md")
- fieldTableMarkdown(appparams, logic.AppParamsFieldNames, logic.AppParamsFieldTypes, logic.AppParamsFieldDocs())
+ fieldSpecsMarkdown(appparams, logic.AppParamsFieldNames, logic.AppParamsFieldSpecByName)
appparams.Close()
+ acctparams, _ := os.Create("acct_params_fields.md")
+ fieldSpecsMarkdown(acctparams, logic.AcctParamsFieldNames, logic.AcctParamsFieldSpecByName)
+ acctparams.Close()
+
langspecjs, _ := os.Create("langspec.json")
enc := json.NewEncoder(langspecjs)
enc.Encode(buildLanguageSpec(opGroups))
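The langspec refactor above replaces the per-opcode `argEnum`/`argEnumTypes` switches with a single `argEnums` that looks each field name up in a spec table through a small `speccer` interface. The following is a hypothetical, simplified sketch of that pattern for readers skimming the diff; the type names and the toy lookup table here are illustrative assumptions, not the package's actual definitions.

```go
package main

import "fmt"

// StackType stands in for the one-letter type codes the doc generator emits.
type StackType byte

// fieldSpec is the minimal view the generator needs: a field's stack type.
type fieldSpec interface {
	Type() StackType
}

// speccer maps a field name to its spec, mirroring how fieldsAndTypes is used above.
type speccer interface {
	SpecByName(name string) fieldSpec
}

// txnSpeccer is a toy lookup table standing in for logic.TxnFieldSpecByName.
type txnSpec struct{ typ StackType }

func (s txnSpec) Type() StackType { return s.typ }

type txnSpeccer map[string]txnSpec

func (m txnSpeccer) SpecByName(name string) fieldSpec { return m[name] }

// fieldsAndTypes returns the field names plus one type code per field.
func fieldsAndTypes(names []string, specs speccer) ([]string, string) {
	out := make([]byte, len(names))
	for i, name := range names {
		out[i] = byte(specs.SpecByName(name).Type())
	}
	return names, string(out)
}

func main() {
	specs := txnSpeccer{"Sender": {typ: 'B'}, "Fee": {typ: 'U'}}
	names, types := fieldsAndTypes([]string{"Sender", "Fee"}, specs)
	fmt.Println(names, types) // [Sender Fee] BU
}
```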
diff --git a/cmd/tealdbg/cdtSession_test.go b/cmd/tealdbg/cdtSession_test.go
index 7117f97a8..864fa9937 100644
--- a/cmd/tealdbg/cdtSession_test.go
+++ b/cmd/tealdbg/cdtSession_test.go
@@ -479,7 +479,7 @@ func TestCdtSessionGetObjects(t *testing.T) {
state := cdtState{
disassembly: "version 2\nint 1",
proto: &proto,
- txnGroup: []transactions.SignedTxn{
+ txnGroup: transactions.WrapSignedTxnsWithAD([]transactions.SignedTxn{
{
Txn: transactions.Transaction{
Type: protocol.PaymentTx,
@@ -496,7 +496,7 @@ func TestCdtSessionGetObjects(t *testing.T) {
},
},
},
- },
+ }),
groupIndex: 0,
globals: globals,
stack: []basics.TealValue{{Type: basics.TealBytesType, Bytes: "test"}},
diff --git a/cmd/tealdbg/cdtState.go b/cmd/tealdbg/cdtState.go
index 972f87938..5e7ee9825 100644
--- a/cmd/tealdbg/cdtState.go
+++ b/cmd/tealdbg/cdtState.go
@@ -37,7 +37,7 @@ type cdtState struct {
// immutable content
disassembly string
proto *config.ConsensusParams
- txnGroup []transactions.SignedTxn
+ txnGroup []transactions.SignedTxnWithAD
groupIndex int
globals []basics.TealValue
@@ -90,7 +90,7 @@ var txnFileTypeHints = map[logic.TxnField]typeHint{
logic.FreezeAssetAccount: addressHint,
}
-func (s *cdtState) Init(disassembly string, proto *config.ConsensusParams, txnGroup []transactions.SignedTxn, groupIndex int, globals []basics.TealValue) {
+func (s *cdtState) Init(disassembly string, proto *config.ConsensusParams, txnGroup []transactions.SignedTxnWithAD, groupIndex int, globals []basics.TealValue) {
s.disassembly = disassembly
s.proto = proto
s.txnGroup = txnGroup
@@ -460,7 +460,7 @@ func makeIntPreview(n int) (prop []cdt.RuntimePropertyPreview) {
return
}
-func makeTxnPreview(txnGroup []transactions.SignedTxn, groupIndex int) cdt.RuntimeObjectPreview {
+func makeTxnPreview(txnGroup []transactions.SignedTxnWithAD, groupIndex int) cdt.RuntimeObjectPreview {
var prop []cdt.RuntimePropertyPreview
if len(txnGroup) > 0 {
fields := prepareTxn(&txnGroup[groupIndex].Txn, groupIndex)
@@ -471,7 +471,7 @@ func makeTxnPreview(txnGroup []transactions.SignedTxn, groupIndex int) cdt.Runti
return p
}
-func makeGtxnPreview(txnGroup []transactions.SignedTxn) cdt.RuntimeObjectPreview {
+func makeGtxnPreview(txnGroup []transactions.SignedTxnWithAD) cdt.RuntimeObjectPreview {
prop := makeIntPreview(len(txnGroup))
p := cdt.RuntimeObjectPreview{
Type: "object",
diff --git a/cmd/tealdbg/debugger_test.go b/cmd/tealdbg/debugger_test.go
index f59236e99..b37f7fb35 100644
--- a/cmd/tealdbg/debugger_test.go
+++ b/cmd/tealdbg/debugger_test.go
@@ -100,11 +100,8 @@ func TestDebuggerSimple(t *testing.T) {
da := makeTestDbgAdapter(t)
debugger.AddAdapter(da)
- ep := logic.EvalParams{
- Proto: &proto,
- Debugger: debugger,
- Txn: &transactions.SignedTxn{},
- }
+ ep := logic.NewEvalParams(make([]transactions.SignedTxnWithAD, 1), &proto, nil)
+ ep.Debugger = debugger
source := `int 0
int 1
@@ -112,8 +109,9 @@ int 1
`
ops, err := logic.AssembleStringWithVersion(source, 1)
require.NoError(t, err)
+ ep.TxnGroup[0].Lsig.Logic = ops.Program
- _, err = logic.Eval(ops.Program, ep)
+ _, err = logic.EvalSignature(0, ep)
require.NoError(t, err)
da.WaitForCompletion()
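The test change above shows the new group-oriented evaluation API: one `EvalParams` is built for the whole transaction group, the logicsig program is attached to its slot in the group, and evaluation is requested by group index. A minimal sketch of that calling convention, using only the constructors, fields, and signatures that appear in this diff (the assembled program is illustrative):

```go
package main

import (
	"fmt"

	"github.com/algorand/go-algorand/config"
	"github.com/algorand/go-algorand/data/transactions"
	"github.com/algorand/go-algorand/data/transactions/logic"
	"github.com/algorand/go-algorand/protocol"
)

func main() {
	proto := config.Consensus[protocol.ConsensusCurrentVersion]

	// Assemble a trivial program that approves by leaving 1 on the stack.
	ops, err := logic.AssembleStringWithVersion("int 1", 1)
	if err != nil {
		panic(err)
	}

	// One EvalParams for the whole group; attach the logicsig, then evaluate by index.
	group := make([]transactions.SignedTxnWithAD, 1)
	ep := logic.NewEvalParams(group, &proto, &transactions.SpecialAddresses{})
	ep.TxnGroup[0].Lsig.Logic = ops.Program

	pass, err := logic.EvalSignature(0, ep)
	fmt.Println(pass, err)
}
```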
diff --git a/cmd/tealdbg/local.go b/cmd/tealdbg/local.go
index 2f0a1b4f5..b3e74cc5a 100644
--- a/cmd/tealdbg/local.go
+++ b/cmd/tealdbg/local.go
@@ -205,25 +205,25 @@ const (
// evaluation is a description of a single debugger run
type evaluation struct {
- program []byte
- source string
- offsetToLine map[int]int
- name string
- groupIndex uint64
- pastSideEffects []logic.EvalSideEffects
- mode modeType
- aidx basics.AppIndex
- ba apply.Balances
- result evalResult
- states AppState
+ program []byte
+ source string
+ offsetToLine map[int]int
+ name string
+ groupIndex uint64
+ mode modeType
+ aidx basics.AppIndex
+ ba apply.Balances
+ result evalResult
+ states AppState
}
-func (e *evaluation) eval(ep logic.EvalParams) (pass bool, err error) {
+func (e *evaluation) eval(gi int, ep *logic.EvalParams) (pass bool, err error) {
if e.mode == modeStateful {
- pass, _, err = e.ba.StatefulEval(ep, e.aidx, e.program)
+ pass, _, err = e.ba.StatefulEval(gi, ep, e.aidx, e.program)
return
}
- return logic.Eval(e.program, ep)
+ ep.TxnGroup[gi].Lsig.Logic = e.program
+ return logic.EvalSignature(gi, ep)
}
// LocalRunner runs local eval
@@ -340,17 +340,6 @@ func (r *LocalRunner) Setup(dp *DebugParams) (err error) {
dp.LatestTimestamp = int64(ddr.LatestTimestamp)
}
- if dp.PastSideEffects == nil {
- dp.PastSideEffects = logic.MakePastSideEffects(len(r.txnGroup))
- } else if len(dp.PastSideEffects) != len(r.txnGroup) {
- err = fmt.Errorf(
- "invalid past side effects slice with length %d should match group length of %d txns",
- len(dp.PastSideEffects),
- len(r.txnGroup),
- )
- return
- }
-
// if program(s) specified then run from it
if len(dp.ProgramBlobs) > 0 {
if len(r.txnGroup) == 1 && dp.GroupIndex != 0 {
@@ -388,7 +377,6 @@ func (r *LocalRunner) Setup(dp *DebugParams) (err error) {
}
}
r.runs[i].groupIndex = uint64(dp.GroupIndex)
- r.runs[i].pastSideEffects = dp.PastSideEffects
r.runs[i].name = dp.ProgramNames[i]
var mode modeType
@@ -451,13 +439,12 @@ func (r *LocalRunner) Setup(dp *DebugParams) (err error) {
return
}
run := evaluation{
- program: stxn.Txn.ApprovalProgram,
- groupIndex: uint64(gi),
- pastSideEffects: dp.PastSideEffects,
- mode: modeStateful,
- aidx: appIdx,
- ba: b,
- states: states,
+ program: stxn.Txn.ApprovalProgram,
+ groupIndex: uint64(gi),
+ mode: modeStateful,
+ aidx: appIdx,
+ ba: b,
+ states: states,
}
r.runs = append(r.runs, run)
}
@@ -487,13 +474,12 @@ func (r *LocalRunner) Setup(dp *DebugParams) (err error) {
return
}
run := evaluation{
- program: program,
- groupIndex: uint64(gi),
- pastSideEffects: dp.PastSideEffects,
- mode: modeStateful,
- aidx: appIdx,
- ba: b,
- states: states,
+ program: program,
+ groupIndex: uint64(gi),
+ mode: modeStateful,
+ aidx: appIdx,
+ ba: b,
+ states: states,
}
r.runs = append(r.runs, run)
found = true
@@ -522,47 +508,27 @@ func (r *LocalRunner) RunAll() error {
return fmt.Errorf("no program to debug")
}
+ txngroup := transactions.WrapSignedTxnsWithAD(r.txnGroup)
failed := 0
start := time.Now()
- pooledApplicationBudget := uint64(0)
- credit, _ := transactions.FeeCredit(r.txnGroup, r.proto.MinTxnFee)
- // ignore error since fees are not important for debugging in most cases
-
- evalParams := make([]logic.EvalParams, len(r.runs))
- for i, run := range r.runs {
- if run.mode == modeStateful {
- if r.proto.EnableAppCostPooling {
- pooledApplicationBudget += uint64(r.proto.MaxAppProgramCost)
- } else {
- pooledApplicationBudget = uint64(r.proto.MaxAppProgramCost)
- }
- }
- ep := logic.EvalParams{
- Proto: &r.proto,
- Debugger: r.debugger,
- Txn: &r.txnGroup[run.groupIndex],
- TxnGroup: r.txnGroup,
- GroupIndex: run.groupIndex,
- PastSideEffects: run.pastSideEffects,
- Specials: &transactions.SpecialAddresses{},
- FeeCredit: &credit,
- PooledApplicationBudget: &pooledApplicationBudget,
- }
- evalParams[i] = ep
- }
+ ep := logic.NewEvalParams(txngroup, &r.proto, &transactions.SpecialAddresses{})
+ ep.Debugger = r.debugger
+
+ var last error
for i := range r.runs {
run := &r.runs[i]
r.debugger.SaveProgram(run.name, run.program, run.source, run.offsetToLine, run.states)
- run.result.pass, run.result.err = run.eval(evalParams[i])
+ run.result.pass, run.result.err = run.eval(int(run.groupIndex), ep)
if run.result.err != nil {
failed++
+ last = run.result.err
}
}
elapsed := time.Since(start)
if failed == len(r.runs) && elapsed < time.Second {
- return fmt.Errorf("all %d program(s) failed in less than a second, invocation error?", failed)
+ return fmt.Errorf("all %d program(s) failed in less than a second, invocation error? %w", failed, last)
}
return nil
}
@@ -573,44 +539,19 @@ func (r *LocalRunner) Run() (bool, error) {
return false, fmt.Errorf("no program to debug")
}
- pooledApplicationBudget := uint64(0)
- credit, _ := transactions.FeeCredit(r.txnGroup, r.proto.MinTxnFee)
- // ignore error since fees are not important for debugging in most cases
+ txngroup := transactions.WrapSignedTxnsWithAD(r.txnGroup)
- evalParams := make([]logic.EvalParams, len(r.runs))
- for i, run := range r.runs {
- if run.mode == modeStateful {
- if r.proto.EnableAppCostPooling {
- pooledApplicationBudget += uint64(r.proto.MaxAppProgramCost)
- } else {
- pooledApplicationBudget = uint64(r.proto.MaxAppProgramCost)
- }
- }
- ep := logic.EvalParams{
- Proto: &r.proto,
- Txn: &r.txnGroup[run.groupIndex],
- TxnGroup: r.txnGroup,
- GroupIndex: run.groupIndex,
- PastSideEffects: run.pastSideEffects,
- Specials: &transactions.SpecialAddresses{},
- FeeCredit: &credit,
- PooledApplicationBudget: &pooledApplicationBudget,
- }
-
- // Workaround for Go's nil/empty interfaces nil check after nil assignment, i.e.
- // r.debugger = nil
- // ep.Debugger = r.debugger
- // if ep.Debugger != nil // FALSE
- if r.debugger != nil {
- r.debugger.SaveProgram(run.name, run.program, run.source, run.offsetToLine, run.states)
- ep.Debugger = r.debugger
- }
-
- evalParams[i] = ep
- }
+ ep := logic.NewEvalParams(txngroup, &r.proto, &transactions.SpecialAddresses{})
run := r.runs[0]
- ep := evalParams[0]
+ // Workaround for Go's nil/empty interfaces nil check after nil assignment, i.e.
+ // r.debugger = nil
+ // ep.Debugger = r.debugger
+ // if ep.Debugger != nil // FALSE
+ if r.debugger != nil {
+ r.debugger.SaveProgram(run.name, run.program, run.source, run.offsetToLine, run.states)
+ ep.Debugger = r.debugger
+ }
- return run.eval(ep)
+ return run.eval(int(run.groupIndex), ep)
}
diff --git a/cmd/tealdbg/localLedger_test.go b/cmd/tealdbg/localLedger_test.go
index 496f1407d..3dad5508c 100644
--- a/cmd/tealdbg/localLedger_test.go
+++ b/cmd/tealdbg/localLedger_test.go
@@ -90,6 +90,7 @@ int 2
// make transaction group: app call + sample payment
txn := transactions.SignedTxn{
Txn: transactions.Transaction{
+ Type: protocol.ApplicationCallTx,
Header: transactions.Header{
Sender: addr,
Fee: basics.MicroAlgos{Raw: 100},
@@ -109,22 +110,15 @@ int 2
a.NoError(err)
proto := config.Consensus[protocol.ConsensusCurrentVersion]
- pse := logic.MakePastSideEffects(1)
- ep := logic.EvalParams{
- Txn: &txn,
- Proto: &proto,
- TxnGroup: []transactions.SignedTxn{txn},
- GroupIndex: 0,
- PastSideEffects: pse,
- }
- pass, delta, err := ba.StatefulEval(ep, appIdx, program)
+ ep := logic.NewEvalParams([]transactions.SignedTxnWithAD{{SignedTxn: txn}}, &proto, &transactions.SpecialAddresses{})
+ pass, delta, err := ba.StatefulEval(0, ep, appIdx, program)
a.NoError(err)
a.True(pass)
- a.Equal(1, len(delta.GlobalDelta))
+ a.Len(delta.GlobalDelta, 1)
a.Equal(basics.SetUintAction, delta.GlobalDelta["gkeyint"].Action)
a.Equal(uint64(3), delta.GlobalDelta["gkeyint"].Uint)
- a.Equal(1, len(delta.LocalDeltas))
- a.Equal(1, len(delta.LocalDeltas[0]))
+ a.Len(delta.LocalDeltas, 1)
+ a.Len(delta.LocalDeltas[0], 1)
a.Equal(basics.SetUintAction, delta.LocalDeltas[0]["lkeyint"].Action)
a.Equal(uint64(2), delta.LocalDeltas[0]["lkeyint"].Uint)
}
diff --git a/cmd/tealdbg/local_test.go b/cmd/tealdbg/local_test.go
index 14b16f43a..84328dbbb 100644
--- a/cmd/tealdbg/local_test.go
+++ b/cmd/tealdbg/local_test.go
@@ -317,6 +317,7 @@ func TestDebugEnvironment(t *testing.T) {
// make transaction group: app call + sample payment
txn := transactions.SignedTxn{
Txn: transactions.Transaction{
+ Type: protocol.ApplicationCallTx,
Header: transactions.Header{
Sender: sender,
Fee: basics.MicroAlgos{Raw: 1000},
@@ -524,7 +525,7 @@ func TestDebugFromPrograms(t *testing.T) {
partitiontest.PartitionTest(t)
a := require.New(t)
- txnBlob := []byte("[" + strings.Join([]string{string(txnSample), txnSample}, ",") + "]")
+ txnBlob := []byte("[" + strings.Join([]string{txnSample, txnSample}, ",") + "]")
l := LocalRunner{}
dp := DebugParams{
@@ -603,7 +604,7 @@ func TestRunMode(t *testing.T) {
partitiontest.PartitionTest(t)
a := require.New(t)
- txnBlob := []byte("[" + strings.Join([]string{string(txnSample), txnSample}, ",") + "]")
+ txnBlob := []byte("[" + strings.Join([]string{txnSample, txnSample}, ",") + "]")
l := LocalRunner{}
// check run mode auto on stateful code
@@ -625,7 +626,7 @@ func TestRunMode(t *testing.T) {
a.Equal(modeStateful, l.runs[0].mode)
a.Equal(basics.AppIndex(100), l.runs[0].aidx)
a.NotEqual(
- reflect.ValueOf(logic.Eval).Pointer(),
+ reflect.ValueOf(logic.EvalSignature).Pointer(),
reflect.ValueOf(l.runs[0].eval).Pointer(),
)
@@ -960,7 +961,7 @@ func TestLocalBalanceAdapter(t *testing.T) {
a.Equal(modeStateful, l.runs[0].mode)
a.NotEmpty(l.runs[0].aidx)
a.NotEqual(
- reflect.ValueOf(logic.Eval).Pointer(),
+ reflect.ValueOf(logic.EvalSignature).Pointer(),
reflect.ValueOf(l.runs[0].eval).Pointer(),
)
ba := l.runs[0].ba
@@ -1051,7 +1052,7 @@ func TestLocalBalanceAdapterIndexer(t *testing.T) {
a.Equal(modeStateful, l.runs[0].mode)
a.NotEmpty(l.runs[0].aidx)
a.NotEqual(
- reflect.ValueOf(logic.Eval).Pointer(),
+ reflect.ValueOf(logic.EvalSignature).Pointer(),
reflect.ValueOf(l.runs[0].eval).Pointer(),
)
diff --git a/cmd/tealdbg/server.go b/cmd/tealdbg/server.go
index b6ccb0524..cf062af62 100644
--- a/cmd/tealdbg/server.go
+++ b/cmd/tealdbg/server.go
@@ -25,7 +25,6 @@ import (
"strings"
"time"
- "github.com/algorand/go-algorand/data/transactions/logic"
"github.com/algorand/websocket"
"github.com/gorilla/mux"
)
@@ -86,7 +85,6 @@ type DebugParams struct {
Proto string
TxnBlob []byte
GroupIndex int
- PastSideEffects []logic.EvalSideEffects
BalanceBlob []byte
DdrBlob []byte
IndexerURL string
diff --git a/cmd/tealdbg/server_test.go b/cmd/tealdbg/server_test.go
index fa06da65a..a91be7181 100644
--- a/cmd/tealdbg/server_test.go
+++ b/cmd/tealdbg/server_test.go
@@ -131,7 +131,7 @@ func TestServerRemote(t *testing.T) {
func TestServerLocal(t *testing.T) {
partitiontest.PartitionTest(t)
- txnBlob := []byte("[" + strings.Join([]string{string(txnSample), txnSample}, ",") + "]")
+ txnBlob := []byte("[" + strings.Join([]string{txnSample, txnSample}, ",") + "]")
dp := DebugParams{
ProgramNames: []string{"test"},
ProgramBlobs: [][]byte{{2, 0x20, 1, 1, 0x22}}, // version, intcb, int 1
diff --git a/config/consensus.go b/config/consensus.go
index 1f553d20e..2a29ed823 100644
--- a/config/consensus.go
+++ b/config/consensus.go
@@ -285,6 +285,9 @@ type ConsensusParams struct {
// maximum number of inner transactions that can be created by an app call
MaxInnerTransactions int
+ // should inner transaction limit be pooled across app calls?
+ EnableInnerTransactionPooling bool
+
// maximum number of applications a single account can create and store
// AppParams for at once
MaxAppsCreated int
@@ -1051,6 +1054,7 @@ func initConsensusProtocols() {
// Enable TEAL 6 / AVM 1.1
vFuture.LogicSigVersion = 6
+ vFuture.EnableInnerTransactionPooling = true
vFuture.MaxProposedExpiredOnlineAccounts = 32
diff --git a/daemon/algod/api/server/v2/dryrun.go b/daemon/algod/api/server/v2/dryrun.go
index 19a0f92d6..ef7ed19e2 100644
--- a/daemon/algod/api/server/v2/dryrun.go
+++ b/daemon/algod/api/server/v2/dryrun.go
@@ -366,6 +366,10 @@ func doDryrunRequest(dr *DryrunRequest, response *generated.DryrunResponse) {
return
}
proto := config.Consensus[protocol.ConsensusVersion(dr.ProtocolVersion)]
+ txgroup := transactions.WrapSignedTxnsWithAD(dr.Txns)
+ specials := transactions.SpecialAddresses{}
+ ep := logic.NewEvalParams(txgroup, &proto, &specials)
+
origEnableAppCostPooling := proto.EnableAppCostPooling
// Enable EnableAppCostPooling so that dryrun
// 1) can determine cost 2) reports actual cost for large programs that fail
@@ -381,24 +385,15 @@ func doDryrunRequest(dr *DryrunRequest, response *generated.DryrunResponse) {
allowedBudget += uint64(proto.MaxAppProgramCost)
}
}
+ ep.PooledApplicationBudget = &pooledAppBudget
response.Txns = make([]generated.DryrunTxnResult, len(dr.Txns))
for ti, stxn := range dr.Txns {
- pse := logic.MakePastSideEffects(len(dr.Txns))
- ep := logic.EvalParams{
- Txn: &stxn,
- Proto: &proto,
- TxnGroup: dr.Txns,
- GroupIndex: uint64(ti),
- PastSideEffects: pse,
- PooledApplicationBudget: &pooledAppBudget,
- Specials: &transactions.SpecialAddresses{},
- }
var result generated.DryrunTxnResult
if len(stxn.Lsig.Logic) > 0 {
var debug dryrunDebugReceiver
ep.Debugger = &debug
- pass, err := logic.Eval(stxn.Lsig.Logic, ep)
+ pass, err := logic.EvalSignature(ti, ep)
var messages []string
result.Disassembly = debug.lines
result.LogicSigTrace = &debug.history
@@ -489,7 +484,7 @@ func doDryrunRequest(dr *DryrunRequest, response *generated.DryrunResponse) {
program = app.ApprovalProgram
messages[0] = "ApprovalProgram"
}
- pass, delta, err := ba.StatefulEval(ep, appIdx, program)
+ pass, delta, err := ba.StatefulEval(ti, ep, appIdx, program)
result.Disassembly = debug.lines
result.AppCallTrace = &debug.history
result.GlobalDelta = StateDeltaToStateDelta(delta.GlobalDelta)
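The dryrun change above hoists a single `EvalParams` out of the per-transaction loop and shares one pooled budget across the app calls in the group. A minimal sketch of that setup, assuming (as the diff shows) that `PooledApplicationBudget` is an exported `*uint64` field and that every transaction in this example group is an app call:

```go
package main

import (
	"fmt"

	"github.com/algorand/go-algorand/config"
	"github.com/algorand/go-algorand/data/transactions"
	"github.com/algorand/go-algorand/data/transactions/logic"
	"github.com/algorand/go-algorand/protocol"
)

func main() {
	proto := config.Consensus[protocol.ConsensusCurrentVersion]

	// A two-transaction group; for this sketch, pretend both are app calls.
	group := make([]transactions.SignedTxnWithAD, 2)
	ep := logic.NewEvalParams(group, &proto, &transactions.SpecialAddresses{})

	// One shared pot: each app call contributes MaxAppProgramCost, and every
	// program in the group draws from the same budget.
	pooled := uint64(len(group) * proto.MaxAppProgramCost)
	ep.PooledApplicationBudget = &pooled

	fmt.Println("pooled budget:", *ep.PooledApplicationBudget)
}
```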
diff --git a/daemon/algod/api/server/v2/utils.go b/daemon/algod/api/server/v2/utils.go
index 9a9498ba3..d56e567ed 100644
--- a/daemon/algod/api/server/v2/utils.go
+++ b/daemon/algod/api/server/v2/utils.go
@@ -324,10 +324,13 @@ func convertInnerTxn(txn *transactions.SignedTxnWithAD) preEncodedTxInfo {
response.AssetIndex = numOrNil(uint64(txn.ApplyData.ConfigAsset))
response.ApplicationIndex = numOrNil(uint64(txn.ApplyData.ApplicationID))
- // Deltas, Logs, and Inners can not be set until we allow appl
- // response.LocalStateDelta, response.GlobalStateDelta = convertToDeltas(txn)
- // response.Logs = convertLogs(txn)
- // response.Inners = convertInners(&txn)
+ withStatus := node.TxnWithStatus{
+ Txn: txn.SignedTxn,
+ ApplyData: txn.ApplyData,
+ }
+ response.LocalStateDelta, response.GlobalStateDelta = convertToDeltas(withStatus)
+ response.Logs = convertLogs(withStatus)
+ response.Inners = convertInners(&withStatus)
return response
}
diff --git a/data/transactions/logic/README.md b/data/transactions/logic/README.md
index 49ca82780..f81173d39 100644
--- a/data/transactions/logic/README.md
+++ b/data/transactions/logic/README.md
@@ -1,59 +1,144 @@
-# Transaction Execution Approval Language (TEAL)
-
-TEAL is a bytecode based stack language that executes inside Algorand transactions. TEAL programs can be used to check the parameters of the transaction and approve the transaction as if by a signature. This use of TEAL is called a _LogicSig_. Starting with v2, TEAL programs may
-also execute as _Applications_ which are invoked with explicit application call transactions. Programs have read-only access to the transaction they are attached to, transactions in their atomic transaction group, and a few global values. In addition, _Application_ programs have access to limited state that is global to the application and per-account local state for each account that has opted-in to the application. For both types of program, approval is signaled by finishing with the stack containing a single non-zero uint64 value.
+# The Algorand Virtual Machine (AVM) and TEAL.
+
+The AVM is a bytecode based stack interpreter that executes programs
+associated with Algorand transactions. TEAL is an assembly language
+syntax for specifying a program that is ultimately converted to AVM
+bytecode. These programs can be used to check the parameters of the
+transaction and approve the transaction as if by a signature. This use
+is called a _Smart Signature_. Starting with v2, these programs may
+also execute as _Smart Contracts_, which are often called
+_Applications_. Contract executions are invoked with explicit
+application call transactions.
+
+Programs have read-only access to the transaction they are attached
+to, the other transactions in their atomic transaction group, and a
+few global values. In addition, _Smart Contracts_ have access to
+limited state that is global to the application and per-account local
+state for each account that has opted-in to the application. For both
+types of program, approval is signaled by finishing with the stack
+containing a single non-zero uint64 value.
## The Stack
-The stack starts empty and contains values of either uint64 or bytes
-(`bytes` are implemented in Go as a []byte slice and may not exceed
+The stack starts empty and can contain values of either uint64 or byte-arrays
+(byte-arrays may not exceed
4096 bytes in length). Most operations act on the stack, popping
-arguments from it and pushing results to it.
+arguments from it and pushing results to it. Some operations have
+_immediate_ arguments that are encoded directly into the instruction,
+rather than coming from the stack.
-The maximum stack depth is currently 1000. If the stack depth is
-exceed or if a `bytes` element exceed 4096 bytes, the program fails.
+The maximum stack depth is 1000. If the stack depth is
+exceeded or if a byte-array element exceeds 4096 bytes, the program fails.
## Scratch Space
-In addition to the stack there are 256 positions of scratch space,
-also uint64-bytes union values, each initialized as uint64
-zero. Scratch space is acccesed by the `load(s)` and `store(s)` ops
-moving data from or to scratch space, respectively.
+In addition to the stack there are 256 positions of scratch
+space. Like stack values, scratch locations may be uint64s or
+byte-arrays. Scratch locations are initialized as uint64 zero. Scratch
+space is accessed by the `load(s)` and `store(s)` opcodes which move
+data from or to scratch space, respectively.
+
+## Versions
+
+In order to maintain existing semantics for previously written
+programs, AVM code is versioned. When new opcodes are introduced, or
+behavior is changed, a new version is introduced. Programs carrying
+old versions are executed with their original semantics. In the AVM
+bytecode, the version is an incrementing integer, currently 6, and
+denoted vX throughout this document. User-friendly version numbers
+that correspond to programmer expectations, such as `AVM 1.0`, map to
+these integers. AVM 0.9 is v4. AVM 1.0 is v5. AVM 1.1 is v6.
## Execution Modes
-Starting from version 2 TEAL evaluator can run programs in two modes:
-1. LogicSig (stateless)
-2. Application run (stateful)
+Starting from v2, the AVM can run programs in two modes:
+1. LogicSig or _stateless_ mode, used to execute Smart Signatures
+2. Application or _stateful_ mode, used to execute Smart Contracts
Differences between modes include:
1. Max program length (consensus parameters LogicSigMaxSize, MaxAppTotalProgramLen & MaxExtraAppProgramPages)
2. Max program cost (consensus parameters LogicSigMaxCost, MaxAppProgramCost)
-3. Opcode availability. For example, all stateful operations are only available in stateful mode. Refer to [opcodes document](TEAL_opcodes.md) for details.
+3. Opcode availability. Refer to [opcodes document](TEAL_opcodes.md) for details.
+4. Some global values, such as LatestTimestamp, are only available in stateful mode.
+5. Only Applications can observe transaction effects, such as Logs or IDs allocated to ASAs or new Applications.
+
+## Execution Environment for Smart Signatures
+
+Smart Signatures execute as part of testing a proposed transaction to
+see if it is valid and authorized to be committed into a block. If an
+authorized program executes and finishes with a single non-zero uint64
+value on the stack then that program has validated the transaction it
+is attached to.
+
+The program has access to data from the transaction it is attached to
+(`txn` op), any transactions in a transaction group it is part of
+(`gtxn` op), and a few global values like consensus parameters
+(`global` op). Some "Args" may be attached to a transaction being
+validated by a program. Args are an array of byte strings. A common
+pattern would be to have the key to unlock some contract as an Arg. Be
+aware that Smart Signature Args are recorded on the blockchain and
+publicly visible when the transaction is submitted to the network,
+even before the transaction has been included in a block. These Args
+are _not_ part of the transaction ID nor of the TxGroup hash. They
+also cannot be read from other programs in the group of transactions.
-## Execution Environment for LogicSigs
+A program can either authorize some delegated action on a normal private key signed or multisig account or be wholly in charge of a contract account.
-TEAL LogicSigs run in Algorand nodes as part of testing a proposed transaction to see if it is valid and authorized to be committed into a block.
+* If the account has signed the program (an ed25519 signature on "Program" concatenated with the program bytecode) then if the program returns true the transaction is authorized as if the account had signed it. This allows an account to hand out a signed program so that other users can carry out delegated actions which are approved by the program. Note that Smart Signature Args are _not_ signed.
-If an authorized program executes and finishes with a single non-zero uint64 value on the stack then that program has validated the transaction it is attached to.
+* If the SHA512_256 hash of the program (prefixed by "Program") is equal to the transaction Sender address then this is a contract account wholly controlled by the program. No other signature is necessary or possible. The only way to execute a transaction against the contract account is for the program to approve it.
-The TEAL program has access to data from the transaction it is attached to (`txn` op), any transactions in a transaction group it is part of (`gtxn` op), and a few global values like consensus parameters (`global` op). Some "Args" may be attached to a transaction being validated by a TEAL program. Args are an array of byte strings. A common pattern would be to have the key to unlock some contract as an Arg. Args are recorded on the blockchain and publicly visible when the transaction is submitted to the network. These LogicSig Args are _not_ part of the transaction ID nor of the TxGroup hash. They also cannot be read from other TEAL programs in the group of transactions.
+The bytecode plus the length of all Args must add up to no more than 1000 bytes (consensus parameter LogicSigMaxSize). Each opcode has an associated cost and the program cost must total no more than 20,000 (consensus parameter LogicSigMaxCost). Most opcodes have a cost of 1, but a few slow cryptographic operations are much higher. Prior to v4, the program's cost was estimated as the static sum of all the opcode costs in the program (whether they were actually executed or not). Beginning with v4, the program's cost is tracked dynamically, while being evaluated. If the program exceeds its budget, it fails.
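As a concrete illustration of the contract-account rule just described, the sketch below computes the raw 32-byte digest that must equal the transaction Sender; it is a minimal sketch only (Algorand's human-readable addresses additionally apply base32 encoding and a checksum, which is omitted here), and the bytecode is the tiny test program that appears elsewhere in this diff (version, intcblock with the single constant 1, intc_0).

```go
package main

import (
	"crypto/sha512"
	"fmt"
)

func main() {
	// Tiny "int 1" program from the tests in this diff: version, intcblock, count, 1, intc_0.
	bytecode := []byte{0x02, 0x20, 0x01, 0x01, 0x22}

	// Contract account rule: SHA512_256("Program" || bytecode) must equal txn.Sender.
	digest := sha512.Sum512_256(append([]byte("Program"), bytecode...))
	fmt.Printf("raw 32-byte contract account: %x\n", digest)
}
```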
-A program can either authorize some delegated action on a normal private key signed or multisig account or be wholly in charge of a contract account.
+## Execution Environment for Smart Contracts (Applications)
-* If the account has signed the program (an ed25519 signature on "Program" concatenated with the program bytes) then if the program returns true the transaction is authorized as if the account had signed it. This allows an account to hand out a signed program so that other users can carry out delegated actions which are approved by the program. Note that LogicSig Args are _not_ signed.
+Smart Contracts are executed in ApplicationCall transactions. Like
+Smart Signatures, contracts indicate success by leaving a single
+non-zero integer on the stack. A failed smart contract call is not a
+valid transaction, thus not written to the blockchain. Nodes maintain
+a list of transactions that would succeed, given the current state of
+the blockchain, called the transaction pool. Nodes draw from the pool
+if they are called upon to propose a block.
-* If the SHA512_256 hash of the program (prefixed by "Program") is equal to the transaction Sender address then this is a contract account wholly controlled by the program. No other signature is necessary or possible. The only way to execute a transaction against the contract account is for the program to approve it.
+Smart Contracts have access to everything a Smart Signature may access
+(see previous section), as well as the ability to examine blockchain
+state such as balances and contract state (their own state and the
+state of other contracts). They also have access to some global
+values that are not visible to Smart Signatures because the values
+change over time. Since smart contracts access changing state, nodes
+must rerun their code to determine if the ApplicationCall transactions
+in their pool would still succeed each time a block is added to the
+blockchain.
-The TEAL bytecode plus the length of all Args must add up to no more than 1000 bytes (consensus parameter LogicSigMaxSize). Each TEAL op has an associated cost and the program cost must total no more than 20000 (consensus parameter LogicSigMaxCost). Most ops have a cost of 1, but a few slow crypto ops are much higher. Prior to v4, the program's cost was estimated as the static sum of all the opcode costs in the program (whether they were actually executed or not). Beginning with v4, the program's cost is tracked dynamically, while being evaluated. If the program exceeds its budget, it fails.
+### Resource availability
+
+Smart contracts have limits on their execution budget (700, consensus
+parameter MaxAppProgramCost), and the amount of blockchain state they
+may examine. Opcodes may only access blockchain resources such as
+Accounts, Assets, and contract state if the given resource is
+_available_.
+
+ * A resource in the "foreign array" fields of the ApplicationCall
+ transaction (`txn.Accounts`, `txn.ForeignAssets`, and
+ `txn.ForeignApplications`) is _available_.
+
+ * The `global CurrentApplicationID` and `txn.Sender` are _available_.
+
+ * Prior to v4, all assets were considered _available_ to the
+ `asset_holding_get` opcode.
+
+ * Since v6, any asset or contract that was created earlier in the
+ same transaction group is _available_. In addition, any account
+ that is the contract account of a contract that was created earlier
+ in the group is _available_.
## Constants
-Constants are loaded into the environment into storage separate from the stack. They can then be pushed onto the stack by referring to the type and index. This makes for efficient re-use of byte constants used for account addresses, etc. Constants that are not reused can be pushed with `pushint` or `pushbytes`.
+Constants are loaded into storage separate from the stack and scratch space. They can then be pushed onto the stack by referring to the type and index. This makes for efficient re-use of byte constants used for account addresses, etc. Constants that are not reused can be pushed with `pushint` or `pushbytes`.
The assembler will hide most of this, allowing simple use of `int 1234` and `byte 0xcafed00d`. These constants will automatically get assembled into int and byte pages of constants, de-duplicated, and operations to load them from constant storage space inserted.
-Constants are loaded into the environment by two opcodes, `intcblock` and `bytecblock`. Both of these use [proto-buf style variable length unsigned int](https://developers.google.com/protocol-buffers/docs/encoding#varint), reproduced [here](#varuint). The `intcblock` opcode is followed by a varuint specifying the length of the array and then that number of varuint. The `bytecblock` opcode is followed by a varuint array length then that number of pairs of (varuint, bytes) length prefixed byte strings. This should efficiently load 32 and 64 byte constants which will be common as addresses, hashes, and signatures.
+Constants are prepared by two opcodes, `intcblock` and `bytecblock`. Both of these use [proto-buf style variable length unsigned int](https://developers.google.com/protocol-buffers/docs/encoding#varint), reproduced [here](#varuint). The `intcblock` opcode is followed by a varuint specifying the length of the array and then that number of varuint. The `bytecblock` opcode is followed by a varuint array length then that number of pairs of (varuint, bytes) length prefixed byte strings.
Constants are pushed onto the stack by `intc`, `intc_[0123]`, `pushint`, `bytec`, `bytec_[0123]`, and `pushbytes`. The assembler will handle converting `int N` or `byte N` into the appropriate form of the instruction needed.
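A small hand-rolled sketch of the varuint framing described above, using Go's standard `encoding/binary` varint routines. The `intcblock` opcode byte (0x20) is taken from the test program in this diff; this is illustrative, not the assembler's actual encoder.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// Encode an intcblock payload for the constants {1, 250000}:
	// opcode byte, then a varuint count, then each constant as a varuint.
	consts := []uint64{1, 250000}
	payload := []byte{0x20} // intcblock opcode

	var buf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(buf[:], uint64(len(consts)))
	payload = append(payload, buf[:n]...)
	for _, c := range consts {
		n = binary.PutUvarint(buf[:], c)
		payload = append(payload, buf[:n]...)
	}
	fmt.Printf("% x\n", payload) // 20 02 01 90 a1 0f
}
```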
@@ -63,8 +148,8 @@ Constants are pushed onto the stack by `intc`, `intc_[0123]`, `pushint`, `bytec`
An application transaction must indicate the action to be taken following the execution of its approvalProgram or clearStateProgram. The constants below describe the available actions.
-| Value | Constant name | Description |
-| --- | --- | --- |
+| Value | Name | Description |
+| - | ---- | -------- |
| 0 | NoOp | Only execute the `ApprovalProgram` associated with this application ID, with no additional effects. |
| 1 | OptIn | Before executing the `ApprovalProgram`, allocate local state for this application into the sender's account data. |
| 2 | CloseOut | After executing the `ApprovalProgram`, clear any local state for this application out of the sender's account data. |
@@ -73,8 +158,9 @@ An application transaction must indicate the action to be taken following the ex
| 5 | DeleteApplication | After executing the `ApprovalProgram`, delete the application parameters from the account data of the application's creator. |
#### TypeEnum constants
-| Value | Constant name | Description |
-| --- | --- | --- |
+
+| Value | Name | Description |
+| - | --- | ------ |
| 0 | unknown | Unknown type. Invalid transaction |
| 1 | pay | Payment |
| 2 | keyreg | KeyRegistration |
@@ -86,35 +172,39 @@ An application transaction must indicate the action to be taken following the ex
## Operations
-Most operations work with only one type of argument, uint64 or bytes, and panic if the wrong type value is on the stack.
-
-Many instructions accept values to designate Accounts, Assets, or Applications. Beginning with TEAL v4, these values may always be given as an _offset_ in the corresponding Txn fields (Txn.Accounts, Txn.ForeignAssets, Txn.ForeignApps) _or_ as the value itself (a bytes address for Accounts, or a uint64 ID). The values, however, must still be present in the Txn fields. Before TEAL v4, most opcodes required the use of an offset, except for reading account local values of assets or applications, which accepted the IDs directly and did not require the ID to be present in they corresponding _Foreign_ array. (Note that beginning with TEAL v4, those IDs are required to be present in their corresponding _Foreign_ array.) See individual opcodes for details. In the case of account offsets or application offsets, 0 is specially defined to Txn.Sender or the ID of the current application, respectively.
+Most operations work with only one type of argument, uint64 or bytes, and fail if the wrong type value is on the stack.
-Many programs need only a few dozen instructions. The instruction set has some optimization built in. `intc`, `bytec`, and `arg` take an immediate value byte, making a 2-byte op to load a value onto the stack, but they also have single byte versions for loading the most common constant values. Any program will benefit from having a few common values loaded with a smaller one byte opcode. Cryptographic hashes and `ed25519verify` are single byte opcodes with powerful libraries behind them. These operations still take more time than other ops (and this is reflected in the cost of each op and the cost limit of a program) but are efficient in compiled code space.
+Many instructions accept values to designate Accounts, Assets, or Applications. Beginning with v4, these values may be given as an _offset_ in the corresponding Txn fields (Txn.Accounts, Txn.ForeignAssets, Txn.ForeignApps) _or_ as the value itself (a byte-array address for Accounts, or a uint64 ID). The values, however, must still be present in the Txn fields. Before v4, most opcodes required the use of an offset, except for reading account local values of assets or applications, which accepted the IDs directly and did not require the ID to be present in their corresponding _Foreign_ array. (Note that beginning with v4, those IDs _are_ required to be present in their corresponding _Foreign_ array.) See individual opcodes for details. In the case of account offsets or application offsets, 0 is specially defined to Txn.Sender or the ID of the current application, respectively.
This summary is supplemented by more detail in the [opcodes document](TEAL_opcodes.md).
-Some operations 'panic' and immediately fail the program.
-A transaction checked by a program that panics is not valid.
-A contract account governed by a buggy program might not have a way to get assets back out of it. Code carefully.
+Some operations immediately fail the program.
+A transaction checked by a program that fails is not valid.
+An account governed by a buggy program might not have a way to get assets back out of it. Code carefully.
-### Arithmetic, Logic, and Cryptographic Operations
-
-For one-argument ops, `X` is the last element on the stack, which is typically replaced by a new value.
+In the documentation for each opcode, the stack arguments that are
+popped are referred to alphabetically, beginning with the deepest
+argument as `A`. These arguments are shown in the opcode description,
+and if the opcode must be of a specific type, it is noted there. All
+opcodes fail if a specified type is incorrect.
-For two-argument ops, `A` is the penultimate element on the stack and `B` is the top of the stack. These typically result in popping A and B from the stack and pushing the result.
+If an opcode pushes more than one result, the values are named for
+ease of exposition and clarity concerning their stack positions. When
+an opcode manipulates the stack in such a way that a value changes
+position but is otherwise unchanged, the name of the output on the
+return stack matches the name of the input value.
-For three-argument ops, `A` is the element two below the top, `B` is the penultimate stack element and `C` is the top of the stack. These operations typically pop A, B, and C from the stack and push the result.
+### Arithmetic, Logic, and Cryptographic Operations
-| Op | Description |
-| --- | --- |
-| `sha256` | SHA256 hash of value X, yields [32]byte |
-| `keccak256` | Keccak256 hash of value X, yields [32]byte |
-| `sha512_256` | SHA512_256 hash of value X, yields [32]byte |
+| Opcode | Description |
+| - | -- |
+| `sha256` | SHA256 hash of value A, yields [32]byte |
+| `keccak256` | Keccak256 hash of value A, yields [32]byte |
+| `sha512_256` | SHA512_256 hash of value A, yields [32]byte |
| `ed25519verify` | for (data A, signature B, pubkey C) verify the signature of ("ProgData" \|\| program_hash \|\| data) against the pubkey => {0 or 1} |
| `ecdsa_verify v` | for (data A, signature B, C and pubkey D, E) verify the signature of the data against the pubkey => {0 or 1} |
-| `ecdsa_pk_recover v` | for (data A, recovery id B, signature C, D) recover a public key => [*... stack*, X, Y] |
-| `ecdsa_pk_decompress v` | decompress pubkey A into components X, Y => [*... stack*, X, Y] |
+| `ecdsa_pk_recover v` | for (data A, recovery id B, signature C, D) recover a public key |
+| `ecdsa_pk_decompress v` | decompress pubkey A into components X, Y |
| `+` | A plus B. Fail on overflow. |
| `-` | A minus B. Fail if B > A. |
| `/` | A divided by B (truncated division). Fail if B == 0. |
@@ -127,84 +217,82 @@ For three-argument ops, `A` is the element two below the top, `B` is the penulti
| `\|\|` | A is not zero or B is not zero => {0 or 1} |
| `shl` | A times 2^B, modulo 2^64 |
| `shr` | A divided by 2^B |
-| `sqrt` | The largest integer B such that B^2 <= X |
-| `bitlen` | The highest set bit in X. If X is a byte-array, it is interpreted as a big-endian unsigned integer. bitlen of 0 is 0, bitlen of 8 is 4 |
+| `sqrt` | The largest integer I such that I^2 <= A |
+| `bitlen` | The highest set bit in A. If A is a byte-array, it is interpreted as a big-endian unsigned integer. bitlen of 0 is 0, bitlen of 8 is 4 |
| `exp` | A raised to the Bth power. Fail if A == B == 0 and on overflow |
| `==` | A is equal to B => {0 or 1} |
| `!=` | A is not equal to B => {0 or 1} |
-| `!` | X == 0 yields 1; else 0 |
-| `len` | yields length of byte value X |
-| `itob` | converts uint64 X to big endian bytes |
-| `btoi` | converts bytes X as big endian to uint64 |
+| `!` | A == 0 yields 1; else 0 |
+| `len` | yields length of byte value A |
+| `itob` | converts uint64 A to big endian bytes |
+| `btoi` | converts bytes A as big endian to uint64 |
| `%` | A modulo B. Fail if B == 0. |
| `\|` | A bitwise-or B |
| `&` | A bitwise-and B |
| `^` | A bitwise-xor B |
-| `~` | bitwise invert value X |
-| `mulw` | A times B out to 128-bit long result as low (top) and high uint64 values on the stack |
-| `addw` | A plus B out to 128-bit long result as sum (top) and carry-bit uint64 values on the stack |
-| `divmodw` | Pop four uint64 values. The deepest two are interpreted as a uint128 dividend (deepest value is high word), the top two are interpreted as a uint128 divisor. Four uint64 values are pushed to the stack. The deepest two are the quotient (deeper value is the high uint64). The top two are the remainder, low bits on top. |
-| `expw` | A raised to the Bth power as a 128-bit long result as low (top) and high uint64 values on the stack. Fail if A == B == 0 or if the results exceeds 2^128-1 |
-| `getbit` | pop a target A (integer or byte-array), and index B. Push the Bth bit of A. |
-| `setbit` | pop a target A, index B, and bit C. Set the Bth bit of A to C, and push the result |
-| `getbyte` | pop a byte-array A and integer B. Extract the Bth byte of A and push it as an integer |
-| `setbyte` | pop a byte-array A, integer B, and small integer C (between 0..255). Set the Bth byte of A to C, and push the result |
-| `concat` | pop two byte-arrays A and B and join them, push the result |
-
-These opcodes return portions of byte arrays, accessed by position, in
-various sizes.
+| `~` | bitwise invert value A |
+| `mulw` | A times B as a 128-bit result in two uint64s. X is the high 64 bits, Y is the low |
+| `addw` | A plus B as a 128-bit result. X is the carry-bit, Y is the low-order 64 bits. |
+| `divmodw` | W,X = (A,B / C,D); Y,Z = (A,B modulo C,D) |
+| `expw` | A raised to the Bth power as a 128-bit result in two uint64s. X is the high 64 bits, Y is the low. Fail if A == B == 0 or if the result exceeds 2^128-1 |
+| `getbit` | Bth bit of (byte-array or integer) A. |
+| `setbit` | Copy of (byte-array or integer) A, with the Bth bit set to (0 or 1) C |
+| `getbyte` | Bth byte of A, as an integer |
+| `setbyte` | Copy of A with the Bth byte set to small integer (between 0..255) C |
+| `concat` | join A and B |
### Byte Array Manipulation
-| Op | Description |
-| --- | --- |
-| `substring s e` | pop a byte-array A. For immediate values in 0..255 S and E: extract a range of bytes from A starting at S up to but not including E, push the substring result. If E < S, or either is larger than the array length, the program fails |
-| `substring3` | pop a byte-array A and two integers B and C. Extract a range of bytes from A starting at B up to but not including C, push the substring result. If C < B, or either is larger than the array length, the program fails |
-| `extract s l` | pop a byte-array A. For immediate values in 0..255 S and L: extract a range of bytes from A starting at S up to but not including S+L, push the substring result. If L is 0, then extract to the end of the string. If S or S+L is larger than the array length, the program fails |
-| `extract3` | pop a byte-array A and two integers B and C. Extract a range of bytes from A starting at B up to but not including B+C, push the substring result. If B+C is larger than the array length, the program fails |
-| `extract_uint16` | pop a byte-array A and integer B. Extract a range of bytes from A starting at B up to but not including B+2, convert bytes as big endian and push the uint64 result. If B+2 is larger than the array length, the program fails |
-| `extract_uint32` | pop a byte-array A and integer B. Extract a range of bytes from A starting at B up to but not including B+4, convert bytes as big endian and push the uint64 result. If B+4 is larger than the array length, the program fails |
-| `extract_uint64` | pop a byte-array A and integer B. Extract a range of bytes from A starting at B up to but not including B+8, convert bytes as big endian and push the uint64 result. If B+8 is larger than the array length, the program fails |
-| `base64_decode e` | decode X which was base64-encoded using _encoding_ E. Fail if X is not base64 encoded with encoding E |
-
-These opcodes take byte-array values that are interpreted as
+| Opcode | Description |
+| - | -- |
+| `substring s e` | A range of bytes from A starting at S up to but not including E. If E < S, or either is larger than the array length, the program fails |
+| `substring3` | A range of bytes from A starting at B up to but not including C. If C < B, or either is larger than the array length, the program fails |
+| `extract s l` | A range of bytes from A starting at S up to but not including S+L. If L is 0, then extract to the end of the string. If S or S+L is larger than the array length, the program fails |
+| `extract3` | A range of bytes from A starting at B up to but not including B+C. If B+C is larger than the array length, the program fails |
+| `extract_uint16` | A uint16 formed from a range of big-endian bytes from A starting at B up to but not including B+2. If B+2 is larger than the array length, the program fails |
+| `extract_uint32` | A uint32 formed from a range of big-endian bytes from A starting at B up to but not including B+4. If B+4 is larger than the array length, the program fails |
+| `extract_uint64` | A uint64 formed from a range of big-endian bytes from A starting at B up to but not including B+8. If B+8 is larger than the array length, the program fails |
+| `base64_decode e` | decode A which was base64-encoded using _encoding_ E. Fail if A is not base64 encoded with encoding E |
+
+The following opcodes take byte-array values that are interpreted as
big-endian unsigned integers. For mathematical operators, the
returned values are the shortest byte-array that can represent the
returned value. For example, the zero value is the empty
-byte-array. For comparison operators, the returned value is a uint64
+byte-array. For comparison operators, the returned value is a uint64.
-Input lengths are limited to a maximum length 64 bytes, which
-represents a 512 bit unsigned integer. Output lengths are not
+Input lengths are limited to a maximum length of 64 bytes,
+representing a 512 bit unsigned integer. Output lengths are not
explicitly restricted, though only `b*` and `b+` can produce a larger
output than their inputs, so there is an implicit length limit of 128
bytes on outputs.
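A sketch (using Go's `math/big`, not the AVM's implementation) of the convention just described, applied to `b+`: inputs are interpreted as big-endian unsigned integers and the result is the shortest big-endian encoding, with zero represented as the empty byte-array.

```go
package main

import (
	"fmt"
	"math/big"
)

// bplus mimics the documented `b+` behavior: big-endian unsigned inputs,
// shortest big-endian output (the zero value is the empty byte-array).
func bplus(a, b []byte) []byte {
	var x, y big.Int
	x.SetBytes(a)
	y.SetBytes(b)
	return x.Add(&x, &y).Bytes() // Bytes() drops leading zeros
}

func main() {
	fmt.Printf("% x\n", bplus([]byte{0x01, 0x00}, []byte{0xff})) // 01 ff
	fmt.Printf("% x\n", bplus(nil, nil))                         // empty line: zero is the empty byte-array
}
```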
-| Op | Description |
-| --- | --- |
-| `b+` | A plus B, where A and B are byte-arrays interpreted as big-endian unsigned integers |
-| `b-` | A minus B, where A and B are byte-arrays interpreted as big-endian unsigned integers. Fail on underflow. |
-| `b/` | A divided by B (truncated division), where A and B are byte-arrays interpreted as big-endian unsigned integers. Fail if B is zero. |
-| `b*` | A times B, where A and B are byte-arrays interpreted as big-endian unsigned integers. |
-| `b<` | A is less than B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1} |
-| `b>` | A is greater than B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1} |
-| `b<=` | A is less than or equal to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1} |
-| `b>=` | A is greater than or equal to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1} |
-| `b==` | A is equals to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1} |
-| `b!=` | A is not equal to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1} |
-| `b%` | A modulo B, where A and B are byte-arrays interpreted as big-endian unsigned integers. Fail if B is zero. |
+| Opcode | Description |
+| - | -- |
+| `b+` | A plus B. A and B are interpreted as big-endian unsigned integers |
+| `b-` | A minus B. A and B are interpreted as big-endian unsigned integers. Fail on underflow. |
+| `b/` | A divided by B (truncated division). A and B are interpreted as big-endian unsigned integers. Fail if B is zero. |
+| `b*` | A times B. A and B are interpreted as big-endian unsigned integers. |
+| `b<` | 1 if A is less than B, else 0. A and B are interpreted as big-endian unsigned integers |
+| `b>` | 1 if A is greater than B, else 0. A and B are interpreted as big-endian unsigned integers |
+| `b<=` | 1 if A is less than or equal to B, else 0. A and B are interpreted as big-endian unsigned integers |
+| `b>=` | 1 if A is greater than or equal to B, else 0. A and B are interpreted as big-endian unsigned integers |
+| `b==` | 1 if A is equal to B, else 0. A and B are interpreted as big-endian unsigned integers |
+| `b!=` | 0 if A is equal to B, else 1. A and B are interpreted as big-endian unsigned integers |
+| `b%` | A modulo B. A and B are interpreted as big-endian unsigned integers. Fail if B is zero. |
+| `bsqrt` | The largest integer I such that I^2 <= A. A and I are interpreted as big-endian unsigned integers |
These opcodes operate on the bits of byte-array values. The shorter
-array is interpreted as though left padded with zeros until it is the
+input array is interpreted as though left padded with zeros until it is the
same length as the other input. The returned values are the same
-length as the longest input. Therefore, unlike array arithmetic,
+length as the longer input. Therefore, unlike array arithmetic,
these results may contain leading zero bytes.
-| Op | Description |
-| --- | --- |
-| `b\|` | A bitwise-or B, where A and B are byte-arrays, zero-left extended to the greater of their lengths |
-| `b&` | A bitwise-and B, where A and B are byte-arrays, zero-left extended to the greater of their lengths |
-| `b^` | A bitwise-xor B, where A and B are byte-arrays, zero-left extended to the greater of their lengths |
-| `b~` | X with all bits inverted |
+| Opcode | Description |
+| - | -- |
+| `b\|` | A bitwise-or B. A and B are zero-left extended to the greater of their lengths |
+| `b&` | A bitwise-and B. A and B are zero-left extended to the greater of their lengths |
+| `b^` | A bitwise-xor B. A and B are zero-left extended to the greater of their lengths |
+| `b~` | A with all bits inverted |
### Loading Values
@@ -212,114 +300,115 @@ Opcodes for getting data onto the stack.
Some of these have immediate data in the byte or bytes after the opcode.
-| Op | Description |
-| --- | --- |
+| Opcode | Description |
+| - | -- |
| `intcblock uint ...` | prepare block of uint64 constants for use by intc |
-| `intc i` | push Ith constant from intcblock to stack |
-| `intc_0` | push constant 0 from intcblock to stack |
-| `intc_1` | push constant 1 from intcblock to stack |
-| `intc_2` | push constant 2 from intcblock to stack |
-| `intc_3` | push constant 3 from intcblock to stack |
-| `pushint uint` | push immediate UINT to the stack as an integer |
+| `intc i` | Ith constant from intcblock |
+| `intc_0` | constant 0 from intcblock |
+| `intc_1` | constant 1 from intcblock |
+| `intc_2` | constant 2 from intcblock |
+| `intc_3` | constant 3 from intcblock |
+| `pushint uint` | immediate UINT |
| `bytecblock bytes ...` | prepare block of byte-array constants for use by bytec |
-| `bytec i` | push Ith constant from bytecblock to stack |
-| `bytec_0` | push constant 0 from bytecblock to stack |
-| `bytec_1` | push constant 1 from bytecblock to stack |
-| `bytec_2` | push constant 2 from bytecblock to stack |
-| `bytec_3` | push constant 3 from bytecblock to stack |
-| `pushbytes bytes` | push the following program bytes to the stack |
-| `bzero` | push a byte-array of length X, containing all zero bytes |
-| `arg n` | push Nth LogicSig argument to stack |
-| `arg_0` | push LogicSig argument 0 to stack |
-| `arg_1` | push LogicSig argument 1 to stack |
-| `arg_2` | push LogicSig argument 2 to stack |
-| `arg_3` | push LogicSig argument 3 to stack |
-| `args` | push Xth LogicSig argument to stack |
-| `txn f` | push field F of current transaction to stack |
-| `gtxn t f` | push field F of the Tth transaction in the current group |
-| `txna f i` | push Ith value of the array field F of the current transaction |
-| `txnas f` | push Xth value of the array field F of the current transaction |
-| `gtxna t f i` | push Ith value of the array field F from the Tth transaction in the current group |
-| `gtxnas t f` | push Xth value of the array field F from the Tth transaction in the current group |
-| `gtxns f` | push field F of the Xth transaction in the current group |
-| `gtxnsa f i` | push Ith value of the array field F from the Xth transaction in the current group |
-| `gtxnsas f` | pop an index A and an index B. push Bth value of the array field F from the Ath transaction in the current group |
-| `global f` | push value from globals to stack |
-| `load i` | copy a value from scratch space to the stack. All scratch spaces are 0 at program start. |
-| `loads` | copy a value from the Xth scratch space to the stack. All scratch spaces are 0 at program start. |
-| `store i` | pop value X. store X to the Ith scratch space |
-| `stores` | pop indexes A and B. store B to the Ath scratch space |
-| `gload t i` | push Ith scratch space index of the Tth transaction in the current group |
-| `gloads i` | push Ith scratch space index of the Xth transaction in the current group |
-| `gaid t` | push the ID of the asset or application created in the Tth transaction of the current group |
-| `gaids` | push the ID of the asset or application created in the Xth transaction of the current group |
+| `bytec i` | Ith constant from bytecblock |
+| `bytec_0` | constant 0 from bytecblock |
+| `bytec_1` | constant 1 from bytecblock |
+| `bytec_2` | constant 2 from bytecblock |
+| `bytec_3` | constant 3 from bytecblock |
+| `pushbytes bytes` | immediate BYTES |
+| `bzero` | zero filled byte-array of length A |
+| `arg n` | Nth LogicSig argument |
+| `arg_0` | LogicSig argument 0 |
+| `arg_1` | LogicSig argument 1 |
+| `arg_2` | LogicSig argument 2 |
+| `arg_3` | LogicSig argument 3 |
+| `args` | Ath LogicSig argument |
+| `txn f` | field F of current transaction |
+| `gtxn t f` | field F of the Tth transaction in the current group |
+| `txna f i` | Ith value of the array field F of the current transaction |
+| `txnas f` | Ath value of the array field F of the current transaction |
+| `gtxna t f i` | Ith value of the array field F from the Tth transaction in the current group |
+| `gtxnas t f` | Ath value of the array field F from the Tth transaction in the current group |
+| `gtxns f` | field F of the Ath transaction in the current group |
+| `gtxnsa f i` | Ith value of the array field F from the Ath transaction in the current group |
+| `gtxnsas f` | Bth value of the array field F from the Ath transaction in the current group |
+| `global f` | global field F |
+| `load i` | Ith scratch space value. All scratch spaces are 0 at program start. |
+| `loads` | Ath scratch space value. All scratch spaces are 0 at program start. |
+| `store i` | store A to the Ith scratch space |
+| `stores` | store B to the Ath scratch space |
+| `gload t i` | Ith scratch space value of the Tth transaction in the current group |
+| `gloads i` | Ith scratch space value of the Ath transaction in the current group |
+| `gloadss` | Bth scratch space value of the Ath transaction in the current group |
+| `gaid t` | ID of the asset or application created in the Tth transaction of the current group |
+| `gaids` | ID of the asset or application created in the Ath transaction of the current group |
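+
+As a quick illustration (not part of the specification), the fragment below combines several of these loading opcodes; the constant values and the scratch slot number are arbitrary.
+
+```
+int 1000        // gathered into the intcblock and loaded with an intc_* form
+pushint 7       // immediate constant, bypassing the intcblock
+store 0         // save 7 in scratch slot 0
+load 0          // copy it back onto the stack
+```
+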
**Transaction Fields**
-| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
-| 0 | Sender | []byte | 32 byte address |
-| 1 | Fee | uint64 | micro-Algos |
-| 2 | FirstValid | uint64 | round number |
-| 3 | FirstValidTime | uint64 | Causes program to fail; reserved for future use |
-| 4 | LastValid | uint64 | round number |
-| 5 | Note | []byte | Any data up to 1024 bytes |
-| 6 | Lease | []byte | 32 byte lease value |
-| 7 | Receiver | []byte | 32 byte address |
-| 8 | Amount | uint64 | micro-Algos |
-| 9 | CloseRemainderTo | []byte | 32 byte address |
-| 10 | VotePK | []byte | 32 byte address |
-| 11 | SelectionPK | []byte | 32 byte address |
-| 12 | VoteFirst | uint64 | The first round that the participation key is valid. |
-| 13 | VoteLast | uint64 | The last round that the participation key is valid. |
-| 14 | VoteKeyDilution | uint64 | Dilution for the 2-level participation key |
-| 15 | Type | []byte | Transaction type as bytes |
-| 16 | TypeEnum | uint64 | See table below |
-| 17 | XferAsset | uint64 | Asset ID |
-| 18 | AssetAmount | uint64 | value in Asset's units |
-| 19 | AssetSender | []byte | 32 byte address. Causes clawback of all value of asset from AssetSender if Sender is the Clawback address of the asset. |
-| 20 | AssetReceiver | []byte | 32 byte address |
-| 21 | AssetCloseTo | []byte | 32 byte address |
-| 22 | GroupIndex | uint64 | Position of this transaction within an atomic transaction group. A stand-alone transaction is implicitly element 0 in a group of 1 |
-| 23 | TxID | []byte | The computed ID for this transaction. 32 bytes. |
-| 24 | ApplicationID | uint64 | ApplicationID from ApplicationCall transaction. LogicSigVersion >= 2. |
-| 25 | OnCompletion | uint64 | ApplicationCall transaction on completion action. LogicSigVersion >= 2. |
-| 26 | ApplicationArgs | []byte | Arguments passed to the application in the ApplicationCall transaction. LogicSigVersion >= 2. |
-| 27 | NumAppArgs | uint64 | Number of ApplicationArgs. LogicSigVersion >= 2. |
-| 28 | Accounts | []byte | Accounts listed in the ApplicationCall transaction. LogicSigVersion >= 2. |
-| 29 | NumAccounts | uint64 | Number of Accounts. LogicSigVersion >= 2. |
-| 30 | ApprovalProgram | []byte | Approval program. LogicSigVersion >= 2. |
-| 31 | ClearStateProgram | []byte | Clear state program. LogicSigVersion >= 2. |
-| 32 | RekeyTo | []byte | 32 byte Sender's new AuthAddr. LogicSigVersion >= 2. |
-| 33 | ConfigAsset | uint64 | Asset ID in asset config transaction. LogicSigVersion >= 2. |
-| 34 | ConfigAssetTotal | uint64 | Total number of units of this asset created. LogicSigVersion >= 2. |
-| 35 | ConfigAssetDecimals | uint64 | Number of digits to display after the decimal place when displaying the asset. LogicSigVersion >= 2. |
-| 36 | ConfigAssetDefaultFrozen | uint64 | Whether the asset's slots are frozen by default or not, 0 or 1. LogicSigVersion >= 2. |
-| 37 | ConfigAssetUnitName | []byte | Unit name of the asset. LogicSigVersion >= 2. |
-| 38 | ConfigAssetName | []byte | The asset name. LogicSigVersion >= 2. |
-| 39 | ConfigAssetURL | []byte | URL. LogicSigVersion >= 2. |
-| 40 | ConfigAssetMetadataHash | []byte | 32 byte commitment to some unspecified asset metadata. LogicSigVersion >= 2. |
-| 41 | ConfigAssetManager | []byte | 32 byte address. LogicSigVersion >= 2. |
-| 42 | ConfigAssetReserve | []byte | 32 byte address. LogicSigVersion >= 2. |
-| 43 | ConfigAssetFreeze | []byte | 32 byte address. LogicSigVersion >= 2. |
-| 44 | ConfigAssetClawback | []byte | 32 byte address. LogicSigVersion >= 2. |
-| 45 | FreezeAsset | uint64 | Asset ID being frozen or un-frozen. LogicSigVersion >= 2. |
-| 46 | FreezeAssetAccount | []byte | 32 byte address of the account whose asset slot is being frozen or un-frozen. LogicSigVersion >= 2. |
-| 47 | FreezeAssetFrozen | uint64 | The new frozen value, 0 or 1. LogicSigVersion >= 2. |
-| 48 | Assets | uint64 | Foreign Assets listed in the ApplicationCall transaction. LogicSigVersion >= 3. |
-| 49 | NumAssets | uint64 | Number of Assets. LogicSigVersion >= 3. |
-| 50 | Applications | uint64 | Foreign Apps listed in the ApplicationCall transaction. LogicSigVersion >= 3. |
-| 51 | NumApplications | uint64 | Number of Applications. LogicSigVersion >= 3. |
-| 52 | GlobalNumUint | uint64 | Number of global state integers in ApplicationCall. LogicSigVersion >= 3. |
-| 53 | GlobalNumByteSlice | uint64 | Number of global state byteslices in ApplicationCall. LogicSigVersion >= 3. |
-| 54 | LocalNumUint | uint64 | Number of local state integers in ApplicationCall. LogicSigVersion >= 3. |
-| 55 | LocalNumByteSlice | uint64 | Number of local state byteslices in ApplicationCall. LogicSigVersion >= 3. |
-| 56 | ExtraProgramPages | uint64 | Number of additional pages for each of the application's approval and clear state programs. An ExtraProgramPages of 1 means 2048 more total bytes, or 1024 for each program. LogicSigVersion >= 4. |
-| 57 | Nonparticipation | uint64 | Marks an account nonparticipating for rewards. LogicSigVersion >= 5. |
-| 58 | Logs | []byte | Log messages emitted by an application call (itxn only). LogicSigVersion >= 5. |
-| 59 | NumLogs | uint64 | Number of Logs (itxn only). LogicSigVersion >= 5. |
-| 60 | CreatedAssetID | uint64 | Asset ID allocated by the creation of an ASA (itxn only). LogicSigVersion >= 5. |
-| 61 | CreatedApplicationID | uint64 | ApplicationID allocated by the creation of an application (itxn only). LogicSigVersion >= 5. |
+| Index | Name | Type | In | Notes |
+| - | ------ | -- | - | --------- |
+| 0 | Sender | []byte | | 32 byte address |
+| 1 | Fee | uint64 | | microalgos |
+| 2 | FirstValid | uint64 | | round number |
+| 3 | FirstValidTime | uint64 | | Causes program to fail; reserved for future use |
+| 4 | LastValid | uint64 | | round number |
+| 5 | Note | []byte | | Any data up to 1024 bytes |
+| 6 | Lease | []byte | | 32 byte lease value |
+| 7 | Receiver | []byte | | 32 byte address |
+| 8 | Amount | uint64 | | microalgos |
+| 9 | CloseRemainderTo | []byte | | 32 byte address |
+| 10 | VotePK | []byte | | 32 byte address |
+| 11 | SelectionPK | []byte | | 32 byte address |
+| 12 | VoteFirst | uint64 | | The first round that the participation key is valid. |
+| 13 | VoteLast | uint64 | | The last round that the participation key is valid. |
+| 14 | VoteKeyDilution | uint64 | | Dilution for the 2-level participation key |
+| 15 | Type | []byte | | Transaction type as bytes |
+| 16 | TypeEnum | uint64 | | See table below |
+| 17 | XferAsset | uint64 | | Asset ID |
+| 18 | AssetAmount | uint64 | | value in Asset's units |
+| 19 | AssetSender | []byte | | 32 byte address. Causes clawback of all value of asset from AssetSender if Sender is the Clawback address of the asset. |
+| 20 | AssetReceiver | []byte | | 32 byte address |
+| 21 | AssetCloseTo | []byte | | 32 byte address |
+| 22 | GroupIndex | uint64 | | Position of this transaction within an atomic transaction group. A stand-alone transaction is implicitly element 0 in a group of 1 |
+| 23 | TxID | []byte | | The computed ID for this transaction. 32 bytes. |
+| 24 | ApplicationID | uint64 | v2 | ApplicationID from ApplicationCall transaction |
+| 25 | OnCompletion | uint64 | v2 | ApplicationCall transaction on completion action |
+| 26 | ApplicationArgs | []byte | v2 | Arguments passed to the application in the ApplicationCall transaction |
+| 27 | NumAppArgs | uint64 | v2 | Number of ApplicationArgs |
+| 28 | Accounts | []byte | v2 | Accounts listed in the ApplicationCall transaction |
+| 29 | NumAccounts | uint64 | v2 | Number of Accounts |
+| 30 | ApprovalProgram | []byte | v2 | Approval program |
+| 31 | ClearStateProgram | []byte | v2 | Clear state program |
+| 32 | RekeyTo | []byte | v2 | 32 byte Sender's new AuthAddr |
+| 33 | ConfigAsset | uint64 | v2 | Asset ID in asset config transaction |
+| 34 | ConfigAssetTotal | uint64 | v2 | Total number of units of this asset created |
+| 35 | ConfigAssetDecimals | uint64 | v2 | Number of digits to display after the decimal place when displaying the asset |
+| 36 | ConfigAssetDefaultFrozen | uint64 | v2 | Whether the asset's slots are frozen by default or not, 0 or 1 |
+| 37 | ConfigAssetUnitName | []byte | v2 | Unit name of the asset |
+| 38 | ConfigAssetName | []byte | v2 | The asset name |
+| 39 | ConfigAssetURL | []byte | v2 | URL |
+| 40 | ConfigAssetMetadataHash | []byte | v2 | 32 byte commitment to some unspecified asset metadata |
+| 41 | ConfigAssetManager | []byte | v2 | 32 byte address |
+| 42 | ConfigAssetReserve | []byte | v2 | 32 byte address |
+| 43 | ConfigAssetFreeze | []byte | v2 | 32 byte address |
+| 44 | ConfigAssetClawback | []byte | v2 | 32 byte address |
+| 45 | FreezeAsset | uint64 | v2 | Asset ID being frozen or un-frozen |
+| 46 | FreezeAssetAccount | []byte | v2 | 32 byte address of the account whose asset slot is being frozen or un-frozen |
+| 47 | FreezeAssetFrozen | uint64 | v2 | The new frozen value, 0 or 1 |
+| 48 | Assets | uint64 | v3 | Foreign Assets listed in the ApplicationCall transaction |
+| 49 | NumAssets | uint64 | v3 | Number of Assets |
+| 50 | Applications | uint64 | v3 | Foreign Apps listed in the ApplicationCall transaction |
+| 51 | NumApplications | uint64 | v3 | Number of Applications |
+| 52 | GlobalNumUint | uint64 | v3 | Number of global state integers in ApplicationCall |
+| 53 | GlobalNumByteSlice | uint64 | v3 | Number of global state byteslices in ApplicationCall |
+| 54 | LocalNumUint | uint64 | v3 | Number of local state integers in ApplicationCall |
+| 55 | LocalNumByteSlice | uint64 | v3 | Number of local state byteslices in ApplicationCall |
+| 56 | ExtraProgramPages | uint64 | v4 | Number of additional pages for each of the application's approval and clear state programs. An ExtraProgramPages of 1 means 2048 more total bytes, or 1024 for each program. |
+| 57 | Nonparticipation | uint64 | v5 | Marks an account nonparticipating for rewards |
+| 58 | Logs | []byte | v5 | Log messages emitted by an application call (`itxn` only until v6). Application mode only |
+| 59 | NumLogs | uint64 | v5 | Number of Logs (`itxn` only until v6). Application mode only |
+| 60 | CreatedAssetID | uint64 | v5 | Asset ID allocated by the creation of an ASA (`itxn` only until v6). Application mode only |
+| 61 | CreatedApplicationID | uint64 | v5 | ApplicationID allocated by the creation of an application (`itxn` only until v6). Application mode only |
Additional details in the [opcodes document](TEAL_opcodes.md#txn) on the `txn` op.
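+
+For illustration, a hypothetical fragment reading a few of these fields; the group index and array index shown are arbitrary, and `ApplicationArgs` assumes an application call with at least one argument.
+
+```
+txn Amount               // uint64 field of the current transaction
+gtxn 0 Sender            // field of the first transaction in the group
+txna ApplicationArgs 0   // first element of an array field
+```
+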
@@ -328,20 +417,23 @@ Additional details in the [opcodes document](TEAL_opcodes.md#txn) on the `txn` o
Global fields are fields that are common to all the transactions in the group. In particular it includes consensus parameters.
-| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
-| 0 | MinTxnFee | uint64 | micro Algos |
-| 1 | MinBalance | uint64 | micro Algos |
-| 2 | MaxTxnLife | uint64 | rounds |
-| 3 | ZeroAddress | []byte | 32 byte address of all zero bytes |
-| 4 | GroupSize | uint64 | Number of transactions in this atomic transaction group. At least 1 |
-| 5 | LogicSigVersion | uint64 | Maximum supported TEAL version. LogicSigVersion >= 2. |
-| 6 | Round | uint64 | Current round number. LogicSigVersion >= 2. |
-| 7 | LatestTimestamp | uint64 | Last confirmed block UNIX timestamp. Fails if negative. LogicSigVersion >= 2. |
-| 8 | CurrentApplicationID | uint64 | ID of current application executing. Fails in LogicSigs. LogicSigVersion >= 2. |
-| 9 | CreatorAddress | []byte | Address of the creator of the current application. Fails if no such application is executing. LogicSigVersion >= 3. |
-| 10 | CurrentApplicationAddress | []byte | Address that the current application controls. Fails in LogicSigs. LogicSigVersion >= 5. |
-| 11 | GroupID | []byte | ID of the transaction group. 32 zero bytes if the transaction is not part of a group. LogicSigVersion >= 5. |
+| Index | Name | Type | In | Notes |
+| - | ------ | -- | - | --------- |
+| 0 | MinTxnFee | uint64 | | microalgos |
+| 1 | MinBalance | uint64 | | microalgos |
+| 2 | MaxTxnLife | uint64 | | rounds |
+| 3 | ZeroAddress | []byte | | 32 byte address of all zero bytes |
+| 4 | GroupSize | uint64 | | Number of transactions in this atomic transaction group. At least 1 |
+| 5 | LogicSigVersion | uint64 | v2 | Maximum supported version |
+| 6 | Round | uint64 | v2 | Current round number. Application mode only. |
+| 7 | LatestTimestamp | uint64 | v2 | Last confirmed block UNIX timestamp. Fails if negative. Application mode only. |
+| 8 | CurrentApplicationID | uint64 | v2 | ID of current application executing. Application mode only. |
+| 9 | CreatorAddress | []byte | v3 | Address of the creator of the current application. Application mode only. |
+| 10 | CurrentApplicationAddress | []byte | v5 | Address that the current application controls. Application mode only. |
+| 11 | GroupID | []byte | v5 | ID of the transaction group. 32 zero bytes if the transaction is not part of a group. |
+| 12 | OpcodeBudget | uint64 | v6 | The remaining cost that can be spent by opcodes in this program. |
+| 13 | CallerApplicationID | uint64 | v6 | The application ID of the application that called this application. 0 if this application is at the top-level. Application mode only. |
+| 14 | CallerApplicationAddress | []byte | v6 | The application address of the application that called this application. ZeroAddress if this application is at the top-level. Application mode only. |
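+
+A small, illustrative fragment; the group-size requirement is arbitrary, and `LatestTimestamp` assumes application mode.
+
+```
+global GroupSize
+int 1
+==
+assert                   // require that this transaction stands alone
+global LatestTimestamp   // UNIX timestamp of the last confirmed block
+```
+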
**Asset Fields**
@@ -349,25 +441,25 @@ Global fields are fields that are common to all the transactions in the group. I
Asset fields include `AssetHolding` and `AssetParam` fields that are used in the `asset_holding_get` and `asset_params_get` opcodes.
| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
+| - | ------ | -- | --------- |
| 0 | AssetBalance | uint64 | Amount of the asset unit held by this account |
| 1 | AssetFrozen | uint64 | Is the asset frozen or not |
-| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
-| 0 | AssetTotal | uint64 | Total number of units of this asset |
-| 1 | AssetDecimals | uint64 | See AssetParams.Decimals |
-| 2 | AssetDefaultFrozen | uint64 | Frozen by default or not |
-| 3 | AssetUnitName | []byte | Asset unit name |
-| 4 | AssetName | []byte | Asset name |
-| 5 | AssetURL | []byte | URL with additional info about the asset |
-| 6 | AssetMetadataHash | []byte | Arbitrary commitment |
-| 7 | AssetManager | []byte | Manager commitment |
-| 8 | AssetReserve | []byte | Reserve address |
-| 9 | AssetFreeze | []byte | Freeze address |
-| 10 | AssetClawback | []byte | Clawback address |
-| 11 | AssetCreator | []byte | Creator address. LogicSigVersion >= 5. |
+| Index | Name | Type | In | Notes |
+| - | ------ | -- | - | --------- |
+| 0 | AssetTotal | uint64 | | Total number of units of this asset |
+| 1 | AssetDecimals | uint64 | | See AssetParams.Decimals |
+| 2 | AssetDefaultFrozen | uint64 | | Frozen by default or not |
+| 3 | AssetUnitName | []byte | | Asset unit name |
+| 4 | AssetName | []byte | | Asset name |
+| 5 | AssetURL | []byte | | URL with additional info about the asset |
+| 6 | AssetMetadataHash | []byte | | Arbitrary commitment |
+| 7 | AssetManager | []byte | | Manager commitment |
+| 8 | AssetReserve | []byte | | Reserve address |
+| 9 | AssetFreeze | []byte | | Freeze address |
+| 10 | AssetClawback | []byte | | Clawback address |
+| 11 | AssetCreator | []byte | v5 | Creator address |
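+
+For illustration only, a sketch that checks the sender's holding of the first foreign asset; it assumes application mode, v4-style resource handling, and at least one asset listed in `txn.Assets`.
+
+```
+txn Sender
+txna Assets 0                   // first foreign asset of this application call
+asset_holding_get AssetBalance
+assert                          // fail unless the sender is opted in to the asset
+int 0
+>                               // 1 if the sender's balance is strictly positive
+```
+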
**App Fields**
@@ -375,7 +467,7 @@ Asset fields include `AssetHolding` and `AssetParam` fields that are used in the
App fields used in the `app_params_get` opcode.
| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
+| - | ------ | -- | --------- |
| 0 | AppApprovalProgram | []byte | Bytecode of Approval Program |
| 1 | AppClearStateProgram | []byte | Bytecode of Clear State Program |
| 2 | AppGlobalNumUint | uint64 | Number of uint64 values allowed in Global State |
@@ -387,46 +479,58 @@ App fields used in the `app_params_get` opcode.
| 8 | AppAddress | []byte | Address for which this application has authority |
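+
+As an illustrative sketch (application mode assumed), the running application's own parameters can be read by supplying `global CurrentApplicationID`:
+
+```
+global CurrentApplicationID
+app_params_get AppAddress
+assert              // the running application always exists
+                    // its escrow address is now on top of the stack
+```
+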
+**Account Fields**
+
+Account fields used in the `acct_params_get` opcode.
+
+| Index | Name | Type | Notes |
+| - | ------ | -- | --------- |
+| 0 | AcctBalance | uint64 | Account balance in microalgos |
+| 1 | AcctMinBalance | uint64 | Minimum required balance for account, in microalgos |
+| 2 | AcctAuthAddr | []byte | Address the account is rekeyed to. |
+
+
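+A minimal sketch (v6, application mode); the 1000000 microalgo threshold is arbitrary.
+
+```
+txn Sender
+acct_params_get AcctBalance
+assert              // fails if the sender holds no algos at all
+int 1000000
+>=                  // 1 if the balance covers the threshold
+```
+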
### Flow Control
-| Op | Description |
-| --- | --- |
-| `err` | Error. Fail immediately. This is primarily a fencepost against accidental zero bytes getting compiled into programs. |
-| `bnz target` | branch to TARGET if value X is not zero |
-| `bz target` | branch to TARGET if value X is zero |
+| Opcode | Description |
+| - | -- |
+| `err` | Fail immediately. |
+| `bnz target` | branch to TARGET if value A is not zero |
+| `bz target` | branch to TARGET if value A is zero |
| `b target` | branch unconditionally to TARGET |
-| `return` | use last value on stack as success value; end |
-| `pop` | discard value X from stack |
-| `dup` | duplicate last value on stack |
-| `dup2` | duplicate two last values on stack: A, B -> A, B, A, B |
-| `dig n` | push the Nth value from the top of the stack. dig 0 is equivalent to dup |
+| `return` | use A as success value; end |
+| `pop` | discard A |
+| `dup` | duplicate A |
+| `dup2` | duplicate A and B |
+| `dig n` | Nth value from the top of the stack. dig 0 is equivalent to dup |
| `cover n` | remove top of stack, and place it deeper in the stack such that N elements are above it. Fails if stack depth <= N. |
| `uncover n` | remove the value at depth N in the stack and shift above items down so the Nth deep value is on top of the stack. Fails if stack depth <= N. |
-| `swap` | swaps two last values on stack: A, B -> B, A |
-| `select` | selects one of two values based on top-of-stack: A, B, C -> (if C != 0 then B else A) |
-| `assert` | immediately fail unless value X is a non-zero number |
+| `swap` | swaps A and B on stack |
+| `select` | selects one of two values based on top-of-stack: B if C != 0, else A |
+| `assert` | immediately fail unless A is a non-zero number |
| `callsub target` | branch unconditionally to TARGET, saving the next instruction on the call stack |
| `retsub` | pop the top instruction from the call stack and branch to it |
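+
+A small, hypothetical example combining a branch with a subroutine (`callsub`/`retsub` require v4); the label names and values are arbitrary.
+
+```
+int 5
+callsub double
+int 10
+==
+bnz approve
+err                 // fail if the subroutine did not return 10
+approve:
+int 1
+return
+
+double:             // subroutine: doubles the value on top of the stack
+int 2
+*
+retsub
+```
+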
### State Access
-| Op | Description |
-| --- | --- |
+| Opcode | Description |
+| - | -- |
| `balance` | get balance for account A, in microalgos. The balance is observed after the effects of previous transactions in the group, and after the fee for the current transaction is deducted. |
| `min_balance` | get minimum required balance for account A, in microalgos. Required balance is affected by [ASA](https://developer.algorand.org/docs/features/asa/#assets-overview) and [App](https://developer.algorand.org/docs/features/asc1/stateful/#minimum-balance-requirement-for-a-smart-contract) usage. When creating or opting into an app, the minimum balance grows before the app code runs, therefore the increase is visible there. When deleting or closing out, the minimum balance decreases after the app executes. |
-| `app_opted_in` | check if account A opted in for the application B => {0 or 1} |
-| `app_local_get` | read from account A from local state of the current application key B => value |
-| `app_local_get_ex` | read from account A from local state of the application B key C => [*... stack*, value, 0 or 1] |
-| `app_global_get` | read key A from global state of a current application => value |
-| `app_global_get_ex` | read from application A global state key B => [*... stack*, value, 0 or 1] |
-| `app_local_put` | write to account specified by A to local state of a current application key B with value C |
-| `app_global_put` | write key A and value B to global state of the current application |
-| `app_local_del` | delete from account A local state key B of the current application |
-| `app_global_del` | delete key A from a global state of the current application |
-| `asset_holding_get i` | read from account A and asset B holding field X (imm arg) => {0 or 1 (top), value} |
-| `asset_params_get i` | read from asset A params field X (imm arg) => {0 or 1 (top), value} |
-| `app_params_get i` | read from app A params field X (imm arg) => {0 or 1 (top), value} |
-| `log` | write bytes to log state of the current application |
+| `app_opted_in` | 1 if account A is opted in to application B, else 0 |
+| `app_local_get` | local state of the key B in the current application in account A |
+| `app_local_get_ex` | X is the local state of application B, key C in account A. Y is 1 if key existed, else 0 |
+| `app_global_get` | global state of the key A in the current application |
+| `app_global_get_ex` | X is the global state of application A, key B. Y is 1 if key existed, else 0 |
+| `app_local_put` | write C to key B in account A's local state of the current application |
+| `app_global_put` | write B to key A in the global state of the current application |
+| `app_local_del` | delete key B from account A's local state of the current application |
+| `app_global_del` | delete key A from the global state of the current application |
+| `asset_holding_get f` | X is field F from account A's holding of asset B. Y is 1 if A is opted into B, else 0 |
+| `asset_params_get f` | X is field F from asset A. Y is 1 if A exists, else 0 |
+| `app_params_get f` | X is field F from app A. Y is 1 if A exists, else 0 |
+| `acct_params_get f` | X is field F from account A. Y is 1 if A owns positive algos, else 0 |
+| `log` | write A to log state of the current application |
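+
+For illustration, a sketch that increments a global counter (application mode; the key name "counter" is arbitrary).
+
+```
+byte "counter"      // key for the upcoming write
+byte "counter"
+app_global_get      // current value; a zero uint64 if the key has never been set
+int 1
++
+app_global_put      // write the incremented value back under "counter"
+```
+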
### Inner Transactions
@@ -436,50 +540,59 @@ of a true top-level transaction, programatically. However, they are
different in significant ways. The most important differences are
that they are not signed, duplicates are not rejected, and they do not
appear in the block in the usual way. Instead, their effects are
-noted in metadata associated with the associated top-level application
+noted in metadata associated with their top-level application
call transaction. An inner transaction's `Sender` must be the
SHA512_256 hash of the application ID (prefixed by "appID"), or an
account that has been rekeyed to that hash.
-Currently, inner transactions may perform `pay`, `axfer`, `acfg`, and
+In v5, inner transactions may perform `pay`, `axfer`, `acfg`, and
`afrz` effects. After executing an inner transaction with
`itxn_submit`, the effects of the transaction are visible begining
with the next instruction with, for example, `balance` and
-`min_balance` checks.
+`min_balance` checks. In v6, inner transactions may also perform
+`keyreg` and `appl` effects.
-Of the transaction Header fields, only a few fields may be set:
-`Type`/`TypeEnum`, `Sender`, and `Fee`. For the specific fields of
-each transaction types, any field, except `RekeyTo` may be set. This
-allows, for example, clawback transactions, asset opt-ins, and asset
-creates in addtion to the more common uses of `axfer` and `acfg`. All
-fields default to the zero value, except those described under
-`itxn_begin`.
+In v5, only a few of the Header fields may be set: `Type`/`TypeEnum`,
+`Sender`, and `Fee`. In v6, Header fields `Note` and `RekeyTo` may
+also be set. For the specific (non-header) fields of each transaction
+type, any field may be set. This allows, for example, clawback
+transactions, asset opt-ins, and asset creates in addition to the more
+common uses of `axfer` and `acfg`. All fields default to the zero
+value, except those described under `itxn_begin`.
Fields may be set multiple times, but may not be read. The most recent
-setting is used when `itxn_submit` executes. (For this purpose `Type`
-and `TypeEnum` are considered to be the same field.) `itxn_field`
-fails immediately for unsupported fields, unsupported transaction
-types, or improperly typed values for a particular field. `itxn_field`
-makes aceptance decisions entirely from the field and value provided,
-never considering previously set fields. Illegal interactions between
-fields, such as setting fields that belong to two different
-transaction types, are rejected by `itxn_submit`.
-
-| Op | Description |
-| --- | --- |
+setting is used when `itxn_submit` executes. For this purpose `Type`
+and `TypeEnum` are considered to be the same field. When using
+`itxn_field` to set an array field (`ApplicationArgs`, `Accounts`,
+`Assets`, or `Applications`), each use adds an element to the end of
+the array, rather than setting the entire array at once.
+
+`itxn_field` fails immediately for unsupported fields, unsupported
+transaction types, or improperly typed values for a particular
+field. `itxn_field` makes acceptance decisions entirely from the field
+and value provided, never considering previously set fields. Illegal
+interactions between fields, such as setting fields that belong to two
+different transaction types, are rejected by `itxn_submit`.
+
+| Opcode | Description |
+| - | -- |
| `itxn_begin` | begin preparation of a new inner transaction in a new transaction group |
| `itxn_next` | begin preparation of a new inner transaction in the same transaction group |
-| `itxn_field f` | set field F of the current inner transaction to X |
-| `itxn_submit` | execute the current inner transaction group. Fail if executing this group would exceed 16 total inner transactions, or if any transaction in the group fails. |
-| `itxn f` | push field F of the last inner transaction to stack |
-| `itxna f i` | push Ith value of the array field F of the last inner transaction to stack |
+| `itxn_field f` | set field F of the current inner transaction to A |
+| `itxn_submit` | execute the current inner transaction group. Fail if executing this group would exceed the inner transaction limit, or if any transaction in the group fails. |
+| `itxn f` | field F of the last inner transaction |
+| `itxna f i` | Ith value of the array field F of the last inner transaction |
+| `gitxn t f` | field F of the Tth transaction in the last inner group submitted |
+| `gitxna t f i` | Ith value of the array field F from the Tth transaction in the last inner group submitted |
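+
+A minimal sketch of an inner payment (v5 or later, application mode); the amount is arbitrary, and the application account is assumed to hold enough algos to cover the payment and its fee.
+
+```
+itxn_begin
+int pay
+itxn_field TypeEnum     // an inner payment transaction
+int 1000
+itxn_field Amount       // microalgos
+txn Sender
+itxn_field Receiver     // pay the caller back
+itxn_submit
+```
+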
# Assembler Syntax
-The assembler parses line by line. Ops that just use the stack appear on a line by themselves. Ops that take arguments are the op and then whitespace and then any argument or arguments.
+The assembler parses line by line. Ops that only take stack arguments
+appear on a line by themselves. Immediate arguments follow the opcode
+on the same line, separated by whitespace.
-The first line may contain a special version pragma `#pragma version X`, which directs the assembler to generate TEAL bytecode targeting a certain version. For instance, `#pragma version 2` produces bytecode targeting TEAL v2. By default, the assembler targets TEAL v1.
+The first line may contain a special version pragma `#pragma version X`, which directs the assembler to generate AVM bytecode targeting a certain version. For instance, `#pragma version 2` produces bytecode targeting TEAL v2. By default, the assembler targets TEAL v1.
Subsequent lines may contain other pragma declarations (i.e., `#pragma <some-specification>`), pertaining to checks that the assembler should perform before agreeing to emit the program bytes, specific optimizations, etc. Those declarations are optional and cannot alter the semantics as described in this document.
@@ -487,7 +600,7 @@ Subsequent lines may contain other pragma declarations (i.e., `#pragma <some-spe
## Constants and Pseudo-Ops
-A few pseudo-ops simplify writing code. `int` and `byte` and `addr` and `method` followed by a constant record the constant to a `intcblock` or `bytecblock` at the beginning of code and insert an `intc` or `bytec` reference where the instruction appears to load that value. `addr` parses an Algorand account address base32 and converts it to a regular bytes constant. `method` is passed a method signature and takes the first 4 bytes of the hash to convert it to the standard method selector defined in [ARC4](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0004.md)
+A few pseudo-ops simplify writing code. `int` and `byte` and `addr` and `method` followed by a constant record the constant to a `intcblock` or `bytecblock` at the beginning of code and insert an `intc` or `bytec` reference where the instruction appears to load that value. `addr` parses an Algorand account address base32 and converts it to a regular bytes constant. `method` is passed a method signature and takes the first four bytes of the hash to convert it to the standard method selector defined in [ARC4](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0004.md)
`byte` constants are:
```
@@ -504,7 +617,8 @@ byte "\x01\x02"
byte "string literal"
```
-`int` constants may be `0x` prefixed for hex, `0` prefixed for octal, or decimal numbers.
+`int` constants may be `0x` prefixed for hex, `0o` or `0` prefixed for
+octal, `0b` for binary, or decimal numbers.
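+
+For example, the following lines all assemble to the same constant:
+
+```
+int 255          // decimal
+int 0xff         // hexadecimal
+int 0o377        // octal
+int 0b11111111   // binary
+```
+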
`intcblock` may be explicitly assembled. It will conflict with the assembler gathering `int` pseudo-ops into a `intcblock` program prefix, but may be used if code only has explicit `intc` references. `intcblock` should be followed by space separated int constants all on one line.
@@ -512,7 +626,7 @@ byte "string literal"
## Labels and Branches
-A label is defined by any string not some other op or keyword and ending in ':'. A label can be an argument (without the trailing ':') to a branch instruction.
+A label is defined by any string that is not some other opcode or keyword and that ends in ':'. A label can be used as an argument (without the trailing ':') to a branching instruction.
Example:
```
@@ -525,30 +639,57 @@ pop
# Encoding and Versioning
-A program starts with a varuint declaring the version of the compiled code. Any addition, removal, or change of opcode behavior increments the version. For the most part opcode behavior should not change, addition will be infrequent (not likely more often than every three months and less often as the language matures), and removal should be very rare.
+A compiled program starts with a varuint declaring the version of the compiled code. Any addition, removal, or change of opcode behavior increments the version. For the most part opcode behavior should not change, addition will be infrequent (not likely more often than every three months and less often as the language matures), and removal should be very rare.
For version 1, subsequent bytes after the varuint are program opcode bytes. Future versions could put other metadata following the version identifier.
-It is important to prevent newly-introduced transaction fields from breaking assumptions made by older versions of TEAL. If one of the transactions in a group will execute a TEAL program whose version predates a given field, that field must not be set anywhere in the transaction group, or the group will be rejected. For example, executing a TEAL version 1 program on a transaction with RekeyTo set to a nonzero address will cause the program to fail, regardless of the other contents of the program itself.
+It is important to prevent newly-introduced transaction fields from
+breaking assumptions made by older versions of the AVM. If one of the
+transactions in a group will execute a program whose version predates
+a given field, that field must not be set anywhere in the transaction
+group, or the group will be rejected. For example, executing a version
+1 program on a transaction with RekeyTo set to a nonzero address will
+cause the program to fail, regardless of the other contents of the
+program itself.
This requirement is enforced as follows:
-* For every transaction, compute the earliest TEAL version that supports all the fields and and values in this transaction. For example, a transaction with a nonzero RekeyTo field will have version (at least) 2.
+* For every transaction, compute the earliest version that supports
+ all the fields and values in this transaction. For example, a
+ transaction with a nonzero RekeyTo field will be (at least) v2.
-* Compute the largest version number across all the transactions in a group (of size 1 or more), call it `maxVerNo`. If any transaction in this group has a TEAL program with a version smaller than `maxVerNo`, then that TEAL program will fail.
+* Compute the largest version number across all the transactions in a group (of size 1 or more); call it `maxVerNo`. If any transaction in this group has a program with a version smaller than `maxVerNo`, then that program will fail.
+
+In addition, applications must be version 6 or greater to be eligible
+to be called in an inner transaction.
## Varuint
A '[proto-buf style variable length unsigned int](https://developers.google.com/protocol-buffers/docs/encoding#varint)' is encoded with 7 data bits per byte and the high bit is 1 if there is a following byte and 0 for the last byte. The lowest order 7 bits are in the first byte, followed by successively higher groups of 7 bits.
-# What TEAL Cannot Do
-
-Design and implementation limitations to be aware of with various versions of TEAL.
-
-* Stateless TEAL cannot lookup balances of Algos or other assets. (Standard transaction accounting will apply after TEAL has run and authorized a transaction. A TEAL-approved transaction could still be invalid by other accounting rules just as a standard signed transaction could be invalid. e.g. I can't give away money I don't have.)
-* TEAL cannot access information in previous blocks. TEAL cannot access most information in other transactions in the current block. (TEAL can access fields of the transaction it is attached to and the transactions in an atomic transaction group.)
-* TEAL cannot know exactly what round the current transaction will commit in (but it is somewhere in FirstValid through LastValid).
-* TEAL cannot know exactly what time its transaction is committed.
-* TEAL cannot loop prior to v4. In v3 and prior, the branch instructions `bnz` "branch if not zero", `bz` "branch if zero" and `b` "branch" can only branch forward so as to skip some code.
-* Until v4, TEAL had no notion of subroutines (and therefore no recursion). As of v4, use `callsub` and `retsub`.
-* TEAL cannot make indirect jumps. `b`, `bz`, `bnz`, and `callsub` jump to an immediately specified address, and `retsub` jumps to the address currently on the top of the call stack, which is manipulated only by previous calls to `callsub`.
+# What AVM Programs Cannot Do
+
+Design and implementation limitations to be aware of with various versions.
+
+* Stateless programs cannot lookup balances of Algos or other
+ assets. (Standard transaction accounting will apply after the Smart
+ Signature has authorized a transaction. A transaction could still be
+ invalid by other accounting rules just as a standard signed
+ transaction could be invalid. e.g. I can't give away money I don't
+ have.)
+* Programs cannot access information in previous blocks. Programs
+ cannot access information in other transactions in the current
+ block, unless they are a part of the same atomic transaction group.
+* Smart Signatures cannot know exactly what round the current transaction
+ will commit in (but it is somewhere in FirstValid through
+ LastValid).
+* Programs cannot know exactly what time their transactions are committed.
+* Programs cannot loop prior to v4. In v3 and prior, the branch
+ instructions `bnz` "branch if not zero", `bz` "branch if zero" and
+ `b` "branch" can only branch forward.
+* Until v4, the AVM had no notion of subroutines (and therefore no
+ recursion). As of v4, use `callsub` and `retsub`.
+* Programs cannot make indirect jumps. `b`, `bz`, `bnz`, and `callsub`
+ jump to an immediately specified address, and `retsub` jumps to the
+ address currently on the top of the call stack, which is manipulated
+ only by previous calls to `callsub` and `retsub`.
diff --git a/data/transactions/logic/README_in.md b/data/transactions/logic/README_in.md
index 5d4e4084e..4b3ef750e 100644
--- a/data/transactions/logic/README_in.md
+++ b/data/transactions/logic/README_in.md
@@ -1,59 +1,144 @@
-# Transaction Execution Approval Language (TEAL)
-
-TEAL is a bytecode based stack language that executes inside Algorand transactions. TEAL programs can be used to check the parameters of the transaction and approve the transaction as if by a signature. This use of TEAL is called a _LogicSig_. Starting with v2, TEAL programs may
-also execute as _Applications_ which are invoked with explicit application call transactions. Programs have read-only access to the transaction they are attached to, transactions in their atomic transaction group, and a few global values. In addition, _Application_ programs have access to limited state that is global to the application and per-account local state for each account that has opted-in to the application. For both types of program, approval is signaled by finishing with the stack containing a single non-zero uint64 value.
+# The Algorand Virtual Machine (AVM) and TEAL.
+
+The AVM is a bytecode based stack interpreter that executes programs
+associated with Algorand transactions. TEAL is an assembly language
+syntax for specifying a program that is ultimately converted to AVM
+bytecode. These programs can be used to check the parameters of the
+transaction and approve the transaction as if by a signature. This use
+is called a _Smart Signature_. Starting with v2, these programs may
+also execute as _Smart Contracts_, which are often called
+_Applications_. Contract executions are invoked with explicit
+application call transactions.
+
+Programs have read-only access to the transaction they are attached
+to, the other transactions in their atomic transaction group, and a
+few global values. In addition, _Smart Contracts_ have access to
+limited state that is global to the application and per-account local
+state for each account that has opted-in to the application. For both
+types of program, approval is signaled by finishing with the stack
+containing a single non-zero uint64 value.
## The Stack
-The stack starts empty and contains values of either uint64 or bytes
-(`bytes` are implemented in Go as a []byte slice and may not exceed
+The stack starts empty and can contain values of either uint64 or byte-arrays
+(byte-arrays may not exceed
4096 bytes in length). Most operations act on the stack, popping
-arguments from it and pushing results to it.
+arguments from it and pushing results to it. Some operations have
+_immediate_ arguments that are encoded directly into the instruction,
+rather than coming from the stack.
-The maximum stack depth is currently 1000. If the stack depth is
-exceed or if a `bytes` element exceed 4096 bytes, the program fails.
+The maximum stack depth is 1000. If the stack depth is
+exceeded or if a byte-array element exceeds 4096 bytes, the program fails.
## Scratch Space
-In addition to the stack there are 256 positions of scratch space,
-also uint64-bytes union values, each initialized as uint64
-zero. Scratch space is acccesed by the `load(s)` and `store(s)` ops
-moving data from or to scratch space, respectively.
+In addition to the stack there are 256 positions of scratch
+space. Like stack values, scratch locations may be uint64s or
+byte-arrays. Scratch locations are initialized as uint64 zero. Scratch
+space is accessed by the `load(s)` and `store(s)` opcodes, which move
+data from or to scratch space, respectively.
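+
+A small illustration (the slot number and value are arbitrary; `loads` requires v5):
+
+```
+int 42
+store 10        // save 42 in scratch slot 10
+load 10         // copy it back onto the stack
+int 10
+loads           // the same read, with the slot number taken from the stack
+```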
+
+## Versions
+
+In order to maintain existing semantics for previously written
+programs, AVM code is versioned. When new opcodes are introduced, or
+behavior is changed, a new version is introduced. Programs carrying
+old versions are executed with their original semantics. In the AVM
+bytecode, the version is an incrementing integer, currently 6, and
+denoted vX throughout this document. User-friendly version numbers
+that correspond to programmer expectations, such as `AVM 1.0`, map to
+these integers. AVM 0.9 is v4. AVM 1.0 is v5. AVM 1.1 is v6.
## Execution Modes
-Starting from version 2 TEAL evaluator can run programs in two modes:
-1. LogicSig (stateless)
-2. Application run (stateful)
+Starting from v2, the AVM can run programs in two modes:
+1. LogicSig or _stateless_ mode, used to execute Smart Signatures
+2. Application or _stateful_ mode, used to execute Smart Contracts
Differences between modes include:
1. Max program length (consensus parameters LogicSigMaxSize, MaxAppTotalProgramLen & MaxExtraAppProgramPages)
2. Max program cost (consensus parameters LogicSigMaxCost, MaxAppProgramCost)
-3. Opcode availability. For example, all stateful operations are only available in stateful mode. Refer to [opcodes document](TEAL_opcodes.md) for details.
+3. Opcode availability. Refer to [opcodes document](TEAL_opcodes.md) for details.
+4. Some global values, such as LatestTimestamp, are only available in stateful mode.
+5. Only Applications can observe transaction effects, such as Logs or IDs allocated to ASAs or new Applications.
+
+## Execution Environment for Smart Signatures
+
+Smart Signatures execute as part of testing a proposed transaction to
+see if it is valid and authorized to be committed into a block. If an
+authorized program executes and finishes with a single non-zero uint64
+value on the stack then that program has validated the transaction it
+is attached to.
+
+The program has access to data from the transaction it is attached to
+(`txn` op), any transactions in a transaction group it is part of
+(`gtxn` op), and a few global values like consensus parameters
+(`global` op). Some "Args" may be attached to a transaction being
+validated by a program. Args are an array of byte strings. A common
+pattern would be to have the key to unlock some contract as an Arg. Be
+aware that Smart Signature Args are recorded on the blockchain and
+publicly visible when the transaction is submitted to the network,
+even before the transaction has been included in a block. These Args
+are _not_ part of the transaction ID nor of the TxGroup hash. They
+also cannot be read from other programs in the group of transactions.
-## Execution Environment for LogicSigs
+A program can either authorize some delegated action on a normal private key signed or multisig account or be wholly in charge of a contract account.
-TEAL LogicSigs run in Algorand nodes as part of testing a proposed transaction to see if it is valid and authorized to be committed into a block.
+* If the account has signed the program (an ed25519 signature on "Program" concatenated with the program bytecode) then if the program returns true the transaction is authorized as if the account had signed it. This allows an account to hand out a signed program so that other users can carry out delegated actions which are approved by the program. Note that Smart Signature Args are _not_ signed.
-If an authorized program executes and finishes with a single non-zero uint64 value on the stack then that program has validated the transaction it is attached to.
+* If the SHA512_256 hash of the program (prefixed by "Program") is equal to the transaction Sender address then this is a contract account wholly controlled by the program. No other signature is necessary or possible. The only way to execute a transaction against the contract account is for the program to approve it.
-The TEAL program has access to data from the transaction it is attached to (`txn` op), any transactions in a transaction group it is part of (`gtxn` op), and a few global values like consensus parameters (`global` op). Some "Args" may be attached to a transaction being validated by a TEAL program. Args are an array of byte strings. A common pattern would be to have the key to unlock some contract as an Arg. Args are recorded on the blockchain and publicly visible when the transaction is submitted to the network. These LogicSig Args are _not_ part of the transaction ID nor of the TxGroup hash. They also cannot be read from other TEAL programs in the group of transactions.
+The bytecode plus the length of all Args must add up to no more than 1000 bytes (consensus parameter LogicSigMaxSize). Each opcode has an associated cost and the program cost must total no more than 20,000 (consensus parameter LogicSigMaxCost). Most opcodes have a cost of 1, but a few slow cryptographic operations are much higher. Prior to v4, the program's cost was estimated as the static sum of all the opcode costs in the program (whether they were actually executed or not). Beginning with v4, the program's cost is tracked dynamically, while being evaluated. If the program exceeds its budget, it fails.
-A program can either authorize some delegated action on a normal private key signed or multisig account or be wholly in charge of a contract account.
+## Execution Environment for Smart Contracts (Applications)
-* If the account has signed the program (an ed25519 signature on "Program" concatenated with the program bytes) then if the program returns true the transaction is authorized as if the account had signed it. This allows an account to hand out a signed program so that other users can carry out delegated actions which are approved by the program. Note that LogicSig Args are _not_ signed.
+Smart Contracts are executed in ApplicationCall transactions. Like
+Smart Signatures, contracts indicate success by leaving a single
+non-zero integer on the stack. A failed smart contract call is not a
+valid transaction, thus not written to the blockchain. Nodes maintain
+a list of transactions that would succeed, given the current state of
+the blockchain, called the transaction pool. Nodes draw from the pool
+if they are called upon to propose a block.
-* If the SHA512_256 hash of the program (prefixed by "Program") is equal to the transaction Sender address then this is a contract account wholly controlled by the program. No other signature is necessary or possible. The only way to execute a transaction against the contract account is for the program to approve it.
+Smart Contracts have access to everything a Smart Signature may access
+(see previous section), as well as the ability to examine blockchain
+state such as balances and contract state (their own state and the
+state of other contracts). They also have access to some global
+values that are not visible to Smart Signatures because the values
+change over time. Since smart contracts access changing state, nodes
+must rerun their code to determine if the ApplicationCall transactions
+in their pool would still succeed each time a block is added to the
+blockchain.
+
+### Resource availability
+
+Smart contracts have limits on their execution budget (700, consensus
+parameter MaxAppProgramCost), and the amount of blockchain state they
+may examine. Opcodes may only access blockchain resources such as
+Accounts, Assets, and contract state if the given resource is
+_available_.
+
+ * A resource in the "foreign array" fields of the ApplicationCall
+ transaction (`txn.Accounts`, `txn.ForeignAssets`, and
+ `txn.ForeignApplications`) is _available_.
+
+ * The `global CurrentApplicationID` and `txn.Sender` are _available_.
+
+ * Prior to v4, all assets were considered _available_ to the
+ `asset_holding_get` opcode.
-The TEAL bytecode plus the length of all Args must add up to no more than 1000 bytes (consensus parameter LogicSigMaxSize). Each TEAL op has an associated cost and the program cost must total no more than 20000 (consensus parameter LogicSigMaxCost). Most ops have a cost of 1, but a few slow crypto ops are much higher. Prior to v4, the program's cost was estimated as the static sum of all the opcode costs in the program (whether they were actually executed or not). Beginning with v4, the program's cost is tracked dynamically, while being evaluated. If the program exceeds its budget, it fails.
+ * Since v6, any asset or contract that was created earlier in the
+ same transaction group is _available_. In addition, any account
+ that is the contract account of a contract that was created earlier
+ in the group is _available_.
## Constants
-Constants are loaded into the environment into storage separate from the stack. They can then be pushed onto the stack by referring to the type and index. This makes for efficient re-use of byte constants used for account addresses, etc. Constants that are not reused can be pushed with `pushint` or `pushbytes`.
+Constants are loaded into storage separate from the stack and scratch space. They can then be pushed onto the stack by referring to the type and index. This makes for efficient re-use of byte constants used for account addresses, etc. Constants that are not reused can be pushed with `pushint` or `pushbytes`.
The assembler will hide most of this, allowing simple use of `int 1234` and `byte 0xcafed00d`. These constants will automatically get assembled into int and byte pages of constants, de-duplicated, and operations to load them from constant storage space inserted.
-Constants are loaded into the environment by two opcodes, `intcblock` and `bytecblock`. Both of these use [proto-buf style variable length unsigned int](https://developers.google.com/protocol-buffers/docs/encoding#varint), reproduced [here](#varuint). The `intcblock` opcode is followed by a varuint specifying the length of the array and then that number of varuint. The `bytecblock` opcode is followed by a varuint array length then that number of pairs of (varuint, bytes) length prefixed byte strings. This should efficiently load 32 and 64 byte constants which will be common as addresses, hashes, and signatures.
+Constants are prepared by two opcodes, `intcblock` and `bytecblock`. Both of these use [proto-buf style variable length unsigned int](https://developers.google.com/protocol-buffers/docs/encoding#varint), reproduced [here](#varuint). The `intcblock` opcode is followed by a varuint specifying the length of the array and then that number of varuint. The `bytecblock` opcode is followed by a varuint array length then that number of pairs of (varuint, bytes) length prefixed byte strings.
Constants are pushed onto the stack by `intc`, `intc_[0123]`, `pushint`, `bytec`, `bytec_[0123]`, and `pushbytes`. The assembler will handle converting `int N` or `byte N` into the appropriate form of the instruction needed.
@@ -63,43 +148,44 @@ Constants are pushed onto the stack by `intc`, `intc_[0123]`, `pushint`, `bytec`
## Operations
-Most operations work with only one type of argument, uint64 or bytes, and panic if the wrong type value is on the stack.
+Most operations work with only one type of argument, uint64 or bytes, and fail if a value of the wrong type is on the stack.
-Many instructions accept values to designate Accounts, Assets, or Applications. Beginning with TEAL v4, these values may always be given as an _offset_ in the corresponding Txn fields (Txn.Accounts, Txn.ForeignAssets, Txn.ForeignApps) _or_ as the value itself (a bytes address for Accounts, or a uint64 ID). The values, however, must still be present in the Txn fields. Before TEAL v4, most opcodes required the use of an offset, except for reading account local values of assets or applications, which accepted the IDs directly and did not require the ID to be present in they corresponding _Foreign_ array. (Note that beginning with TEAL v4, those IDs are required to be present in their corresponding _Foreign_ array.) See individual opcodes for details. In the case of account offsets or application offsets, 0 is specially defined to Txn.Sender or the ID of the current application, respectively.
-
-Many programs need only a few dozen instructions. The instruction set has some optimization built in. `intc`, `bytec`, and `arg` take an immediate value byte, making a 2-byte op to load a value onto the stack, but they also have single byte versions for loading the most common constant values. Any program will benefit from having a few common values loaded with a smaller one byte opcode. Cryptographic hashes and `ed25519verify` are single byte opcodes with powerful libraries behind them. These operations still take more time than other ops (and this is reflected in the cost of each op and the cost limit of a program) but are efficient in compiled code space.
+Many instructions accept values to designate Accounts, Assets, or Applications. Beginning with v4, these values may be given as an _offset_ in the corresponding Txn fields (Txn.Accounts, Txn.ForeignAssets, Txn.ForeignApps) _or_ as the value itself (a byte-array address for Accounts, or a uint64 ID). The values, however, must still be present in the Txn fields. Before v4, most opcodes required the use of an offset, except for reading account local values of assets or applications, which accepted the IDs directly and did not require the ID to be present in their corresponding _Foreign_ array. (Note that beginning with v4, those IDs _are_ required to be present in their corresponding _Foreign_ array.) See individual opcodes for details. In the case of account offsets or application offsets, 0 is specially defined to be Txn.Sender or the ID of the current application, respectively.
This summary is supplemented by more detail in the [opcodes document](TEAL_opcodes.md).
-Some operations 'panic' and immediately fail the program.
-A transaction checked by a program that panics is not valid.
-A contract account governed by a buggy program might not have a way to get assets back out of it. Code carefully.
-
-### Arithmetic, Logic, and Cryptographic Operations
+Some operations immediately fail the program.
+A transaction checked by a program that fails is not valid.
+An account governed by a buggy program might not have a way to get assets back out of it. Code carefully.
-For one-argument ops, `X` is the last element on the stack, which is typically replaced by a new value.
+In the documentation for each opcode, the stack arguments that are
+popped are referred to alphabetically, beginning with the deepest
+argument as `A`. These arguments are shown in the opcode description,
+and if an argument must be of a specific type, it is noted there. All
+opcodes fail if a specified argument type is incorrect.
-For two-argument ops, `A` is the penultimate element on the stack and `B` is the top of the stack. These typically result in popping A and B from the stack and pushing the result.
+If an opcode pushes more than one result, the values are named for
+ease of exposition and clarity concerning their stack positions. When
+an opcode manipulates the stack in such a way that a value changes
+position but is otherwise unchanged, the name of the output on the
+return stack matches the name of the input value.
-For three-argument ops, `A` is the element two below the top, `B` is the penultimate stack element and `C` is the top of the stack. These operations typically pop A, B, and C from the stack and push the result.
+### Arithmetic, Logic, and Cryptographic Operations
@@ Arithmetic.md @@
-These opcodes return portions of byte arrays, accessed by position, in
-various sizes.
-
### Byte Array Manipulation
@@ Byte_Array_Manipulation.md @@
-These opcodes take byte-array values that are interpreted as
+The following opcodes take byte-array values that are interpreted as
big-endian unsigned integers. For mathematical operators, the
returned values are the shortest byte-array that can represent the
returned value. For example, the zero value is the empty
-byte-array. For comparison operators, the returned value is a uint64
+byte-array. For comparison operators, the returned value is a uint64.
-Input lengths are limited to a maximum length 64 bytes, which
-represents a 512 bit unsigned integer. Output lengths are not
+Input lengths are limited to a maximum length of 64 bytes,
+representing a 512 bit unsigned integer. Output lengths are not
explicitly restricted, though only `b*` and `b+` can produce a larger
output than their inputs, so there is an implicit length limit of 128
bytes on outputs.
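
A small illustrative sketch of the shortest-representation rule (not part of the spec itself):

```
#pragma version 4
byte 0x01ff
byte 0x01
b+          // 0x01ff + 0x01 = 512, returned as 0x0200, the shortest representation
```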
@@ -107,9 +193,9 @@ bytes on outputs.
@@ Byte_Array_Arithmetic.md @@
These opcodes operate on the bits of byte-array values. The shorter
-array is interpreted as though left padded with zeros until it is the
+input array is interpreted as though left padded with zeros until it is the
same length as the other input. The returned values are the same
-length as the longest input. Therefore, unlike array arithmetic,
+length as the longer input. Therefore, unlike array arithmetic,
these results may contain leading zero bytes.
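
For example, a short sketch of the zero-padding behavior:

```
#pragma version 4
byte 0x00f0
byte 0x0f
b|          // 0x0f is treated as 0x000f; the result is 0x00ff, which keeps its leading zero byte
```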
@@ Byte_Array_Logic.md @@
@@ -148,6 +234,12 @@ App fields used in the `app_params_get` opcode.
@@ app_params_fields.md @@
+**Account Fields**
+
+Account fields used in the `acct_params_get` opcode.
+
+@@ acct_params_fields.md @@
+
### Flow Control
@@ Flow_Control.md @@
@@ -164,43 +256,50 @@ of a true top-level transaction, programatically. However, they are
different in significant ways. The most important differences are
that they are not signed, duplicates are not rejected, and they do not
appear in the block in the usual way. Instead, their effects are
-noted in metadata associated with the associated top-level application
+noted in metadata associated with their top-level application
call transaction. An inner transaction's `Sender` must be the
SHA512_256 hash of the application ID (prefixed by "appID"), or an
account that has been rekeyed to that hash.
-Currently, inner transactions may perform `pay`, `axfer`, `acfg`, and
+In v5, inner transactions may perform `pay`, `axfer`, `acfg`, and
`afrz` effects. After executing an inner transaction with
`itxn_submit`, the effects of the transaction are visible beginning
with the next instruction with, for example, `balance` and
-`min_balance` checks.
+`min_balance` checks. In v6, inner transactions may also perform
+`keyreg` and `appl` effects.
-Of the transaction Header fields, only a few fields may be set:
-`Type`/`TypeEnum`, `Sender`, and `Fee`. For the specific fields of
-each transaction types, any field, except `RekeyTo` may be set. This
-allows, for example, clawback transactions, asset opt-ins, and asset
-creates in addtion to the more common uses of `axfer` and `acfg`. All
-fields default to the zero value, except those described under
-`itxn_begin`.
+In v5, only a few of the Header fields may be set: `Type`/`TypeEnum`,
+`Sender`, and `Fee`. In v6, Header fields `Note` and `RekeyTo` may
+also be set. For the specific (non-header) fields of each transaction
+type, any field may be set. This allows, for example, clawback
+transactions, asset opt-ins, and asset creates in addition to the more
+common uses of `axfer` and `acfg`. All fields default to the zero
+value, except those described under `itxn_begin`.
Fields may be set multiple times, but may not be read. The most recent
-setting is used when `itxn_submit` executes. (For this purpose `Type`
-and `TypeEnum` are considered to be the same field.) `itxn_field`
-fails immediately for unsupported fields, unsupported transaction
-types, or improperly typed values for a particular field. `itxn_field`
-makes aceptance decisions entirely from the field and value provided,
-never considering previously set fields. Illegal interactions between
-fields, such as setting fields that belong to two different
-transaction types, are rejected by `itxn_submit`.
+setting is used when `itxn_submit` executes. For this purpose `Type`
+and `TypeEnum` are considered to be the same field. When using
+`itxn_field` to set an array field (`ApplicationArgs`, `Accounts`,
+`Assets`, or `Applications`), each use adds an element to the end of
+the array, rather than setting the entire array at once.
+
+`itxn_field` fails immediately for unsupported fields, unsupported
+transaction types, or improperly typed values for a particular
+field. `itxn_field` makes acceptance decisions entirely from the field
+and value provided, never considering previously set fields. Illegal
+interactions between fields, such as setting fields that belong to two
+different transaction types, are rejected by `itxn_submit`.
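
As a minimal sketch (assuming a v5 application whose account is already funded), an inner payment of 1000 microalgos back to the caller might look like:

```
#pragma version 5
itxn_begin
int pay
itxn_field TypeEnum
int 1000
itxn_field Amount
txn Sender
itxn_field Receiver
itxn_submit
int 1
```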
@@ Inner_Transactions.md @@
# Assembler Syntax
-The assembler parses line by line. Ops that just use the stack appear on a line by themselves. Ops that take arguments are the op and then whitespace and then any argument or arguments.
+The assembler parses line by line. Ops that only take stack arguments
+appear on a line by themselves. Immediate arguments follow the opcode
+on the same line, separated by whitespace.
-The first line may contain a special version pragma `#pragma version X`, which directs the assembler to generate TEAL bytecode targeting a certain version. For instance, `#pragma version 2` produces bytecode targeting TEAL v2. By default, the assembler targets TEAL v1.
+The first line may contain a special version pragma `#pragma version X`, which directs the assembler to generate AVM bytecode targeting a certain version. For instance, `#pragma version 2` produces bytecode targeting TEAL v2. By default, the assembler targets TEAL v1.
Subsequent lines may contain other pragma declarations (i.e., `#pragma <some-specification>`), pertaining to checks that the assembler should perform before agreeing to emit the program bytes, specific optimizations, etc. Those declarations are optional and cannot alter the semantics as described in this document.
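
A short sketch of the line-oriented syntax described above:

```
#pragma version 2
txn Amount    // opcode followed by an immediate field name
int 1000      // pseudo-op followed by an immediate constant
>=            // stack-only op on a line by itself
return
```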
@@ -208,7 +307,7 @@ Subsequent lines may contain other pragma declarations (i.e., `#pragma <some-spe
## Constants and Pseudo-Ops
-A few pseudo-ops simplify writing code. `int` and `byte` and `addr` and `method` followed by a constant record the constant to a `intcblock` or `bytecblock` at the beginning of code and insert an `intc` or `bytec` reference where the instruction appears to load that value. `addr` parses an Algorand account address base32 and converts it to a regular bytes constant. `method` is passed a method signature and takes the first 4 bytes of the hash to convert it to the standard method selector defined in [ARC4](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0004.md)
+A few pseudo-ops simplify writing code. `int`, `byte`, `addr`, and `method`, each followed by a constant, record the constant in an `intcblock` or `bytecblock` at the beginning of the code and insert an `intc` or `bytec` reference where the instruction appears, to load that value. `addr` parses a base32 Algorand account address and converts it to a regular bytes constant. `method` is passed a method signature and takes the first four bytes of its hash to produce the standard method selector defined in [ARC4](https://github.com/algorandfoundation/ARCs/blob/main/ARCs/arc-0004.md).
`byte` constants are:
```
@@ -225,7 +324,8 @@ byte "\x01\x02"
byte "string literal"
```
-`int` constants may be `0x` prefixed for hex, `0` prefixed for octal, or decimal numbers.
+`int` constants may be `0x` prefixed for hex, `0o` or `0` prefixed for
+octal, `0b` for binary, or decimal numbers.
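
For instance, these lines all load the same constant, 255:

```
int 255
int 0xff
int 0o377
int 0b11111111
```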
`intcblock` may be explicitly assembled. It will conflict with the assembler gathering `int` pseudo-ops into an `intcblock` program prefix, but may be used if code only has explicit `intc` references. `intcblock` should be followed by space-separated int constants all on one line.
@@ -233,7 +333,7 @@ byte "string literal"
## Labels and Branches
-A label is defined by any string not some other op or keyword and ending in ':'. A label can be an argument (without the trailing ':') to a branch instruction.
+A label is defined by any string that is not some other opcode or keyword and that ends in ':'. A label can be an argument (without the trailing ':') to a branching instruction.
Example:
```
@@ -246,30 +346,57 @@ pop
# Encoding and Versioning
-A program starts with a varuint declaring the version of the compiled code. Any addition, removal, or change of opcode behavior increments the version. For the most part opcode behavior should not change, addition will be infrequent (not likely more often than every three months and less often as the language matures), and removal should be very rare.
+A compiled program starts with a varuint declaring the version of the compiled code. Any addition, removal, or change of opcode behavior increments the version. For the most part opcode behavior should not change, addition will be infrequent (not likely more often than every three months and less often as the language matures), and removal should be very rare.
For version 1, subsequent bytes after the varuint are program opcode bytes. Future versions could put other metadata following the version identifier.
-It is important to prevent newly-introduced transaction fields from breaking assumptions made by older versions of TEAL. If one of the transactions in a group will execute a TEAL program whose version predates a given field, that field must not be set anywhere in the transaction group, or the group will be rejected. For example, executing a TEAL version 1 program on a transaction with RekeyTo set to a nonzero address will cause the program to fail, regardless of the other contents of the program itself.
+It is important to prevent newly-introduced transaction fields from
+breaking assumptions made by older versions of the AVM. If one of the
+transactions in a group will execute a program whose version predates
+a given field, that field must not be set anywhere in the transaction
+group, or the group will be rejected. For example, executing a version
+1 program on a transaction with RekeyTo set to a nonzero address will
+cause the program to fail, regardless of the other contents of the
+program itself.
This requirement is enforced as follows:
-* For every transaction, compute the earliest TEAL version that supports all the fields and and values in this transaction. For example, a transaction with a nonzero RekeyTo field will have version (at least) 2.
+* For every transaction, compute the earliest version that supports
+ all the fields and values in this transaction. For example, a
+ transaction with a nonzero RekeyTo field will be (at least) v2.
+
+* Compute the largest version number across all the transactions in a group (of size 1 or more), call it `maxVerNo`. If any transaction in this group has a program with a version smaller than `maxVerNo`, then that program will fail.
-* Compute the largest version number across all the transactions in a group (of size 1 or more), call it `maxVerNo`. If any transaction in this group has a TEAL program with a version smaller than `maxVerNo`, then that TEAL program will fail.
+In addition, applications must be version 6 or greater to be eligible
+to be called in an inner transaction.
## Varuint
A '[proto-buf style variable length unsigned int](https://developers.google.com/protocol-buffers/docs/encoding#varint)' is encoded with 7 data bits per byte and the high bit is 1 if there is a following byte and 0 for the last byte. The lowest order 7 bits are in the first byte, followed by successively higher groups of 7 bits.
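
A brief worked example: the value 300 is encoded as the two bytes `0xAC 0x02`.

```
// 300 = 0b 0000010 0101100
// low 7 bits:  0101100 -> 0x2C, continuation bit set -> 0xAC
// high 7 bits: 0000010 -> 0x02, no continuation      -> 0x02
// encoded: 0xAC 0x02
```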
-# What TEAL Cannot Do
-
-Design and implementation limitations to be aware of with various versions of TEAL.
-
-* Stateless TEAL cannot lookup balances of Algos or other assets. (Standard transaction accounting will apply after TEAL has run and authorized a transaction. A TEAL-approved transaction could still be invalid by other accounting rules just as a standard signed transaction could be invalid. e.g. I can't give away money I don't have.)
-* TEAL cannot access information in previous blocks. TEAL cannot access most information in other transactions in the current block. (TEAL can access fields of the transaction it is attached to and the transactions in an atomic transaction group.)
-* TEAL cannot know exactly what round the current transaction will commit in (but it is somewhere in FirstValid through LastValid).
-* TEAL cannot know exactly what time its transaction is committed.
-* TEAL cannot loop prior to v4. In v3 and prior, the branch instructions `bnz` "branch if not zero", `bz` "branch if zero" and `b` "branch" can only branch forward so as to skip some code.
-* Until v4, TEAL had no notion of subroutines (and therefore no recursion). As of v4, use `callsub` and `retsub`.
-* TEAL cannot make indirect jumps. `b`, `bz`, `bnz`, and `callsub` jump to an immediately specified address, and `retsub` jumps to the address currently on the top of the call stack, which is manipulated only by previous calls to `callsub`.
+# What AVM Programs Cannot Do
+
+Design and implementation limitations to be aware of with various versions.
+
+* Stateless programs cannot look up balances of Algos or other
+ assets. (Standard transaction accounting will apply after the Smart
+ Signature has authorized a transaction. A transaction could still be
+ invalid by other accounting rules just as a standard signed
+ transaction could be invalid. e.g. I can't give away money I don't
+ have.)
+* Programs cannot access information in previous blocks. Programs
+ cannot access information in other transactions in the current
+ block, unless they are a part of the same atomic transaction group.
+* Smart Signatures cannot know exactly what round the current transaction
+ will commit in (but it is somewhere in FirstValid through
+ LastValid).
+* Programs cannot know exactly what time their transactions are committed.
+* Programs cannot loop prior to v4. In v3 and prior, the branch
+ instructions `bnz` "branch if not zero", `bz` "branch if zero" and
+ `b` "branch" can only branch forward.
+* Until v4, the AVM had no notion of subroutines (and therefore no
+ recursion). As of v4, use `callsub` and `retsub`.
+* Programs cannot make indirect jumps. `b`, `bz`, `bnz`, and `callsub`
+ jump to an immediately specified address, and `retsub` jumps to the
+ address currently on the top of the call stack, which is manipulated
+ only by previous calls to `callsub` and `retsub`.
diff --git a/data/transactions/logic/TEAL_opcodes.md b/data/transactions/logic/TEAL_opcodes.md
index 69a86492c..3a85afc72 100644
--- a/data/transactions/logic/TEAL_opcodes.md
+++ b/data/transactions/logic/TEAL_opcodes.md
@@ -6,45 +6,40 @@ Ops have a 'cost' of 1 unless otherwise specified.
## err
- Opcode: 0x00
-- Pops: _None_
-- Pushes: _None_
-- Error. Fail immediately. This is primarily a fencepost against accidental zero bytes getting compiled into programs.
+- Stack: ... &rarr; ...
+- Fail immediately.
## sha256
- Opcode: 0x01
-- Pops: *... stack*, []byte
-- Pushes: []byte
-- SHA256 hash of value X, yields [32]byte
+- Stack: ..., A: []byte &rarr; ..., []byte
+- SHA256 hash of value A, yields [32]byte
- **Cost**:
- - 7 (LogicSigVersion = 1)
- - 35 (LogicSigVersion >= 2)
+ - 7 (v1)
+ - 35 (since v2)
## keccak256
- Opcode: 0x02
-- Pops: *... stack*, []byte
-- Pushes: []byte
-- Keccak256 hash of value X, yields [32]byte
+- Stack: ..., A: []byte &rarr; ..., []byte
+- Keccak256 hash of value A, yields [32]byte
- **Cost**:
- - 26 (LogicSigVersion = 1)
- - 130 (LogicSigVersion >= 2)
+ - 26 (v1)
+ - 130 (since v2)
## sha512_256
- Opcode: 0x03
-- Pops: *... stack*, []byte
-- Pushes: []byte
-- SHA512_256 hash of value X, yields [32]byte
+- Stack: ..., A: []byte &rarr; ..., []byte
+- SHA512_256 hash of value A, yields [32]byte
- **Cost**:
- - 9 (LogicSigVersion = 1)
- - 45 (LogicSigVersion >= 2)
+ - 9 (v1)
+ - 45 (since v2)
## ed25519verify
- Opcode: 0x04
-- Pops: *... stack*, {[]byte A}, {[]byte B}, {[]byte C}
-- Pushes: uint64
+- Stack: ..., A: []byte, B: []byte, C: []byte &rarr; ..., uint64
- for (data A, signature B, pubkey C) verify the signature of ("ProgData" || program_hash || data) against the pubkey => {0 or 1}
- **Cost**: 1900
@@ -53,16 +48,15 @@ The 32 byte public key is the last element on the stack, preceded by the 64 byte
## ecdsa_verify v
- Opcode: 0x05 {uint8 curve index}
-- Pops: *... stack*, {[]byte A}, {[]byte B}, {[]byte C}, {[]byte D}, {[]byte E}
-- Pushes: uint64
+- Stack: ..., A: []byte, B: []byte, C: []byte, D: []byte, E: []byte &rarr; ..., uint64
- for (data A, signature B, C and pubkey D, E) verify the signature of the data against the pubkey => {0 or 1}
- **Cost**: 1700
-- LogicSigVersion >= 5
+- Availability: v5
`ECDSA` Curves:
| Index | Name | Notes |
-| --- | --- | --- |
+| - | ------ | --------- |
| 0 | Secp256k1 | secp256k1 curve |
@@ -71,16 +65,15 @@ The 32 byte Y-component of a public key is the last element on the stack, preced
## ecdsa_pk_decompress v
- Opcode: 0x06 {uint8 curve index}
-- Pops: *... stack*, []byte
-- Pushes: *... stack*, []byte, []byte
-- decompress pubkey A into components X, Y => [*... stack*, X, Y]
+- Stack: ..., A: []byte &rarr; ..., X: []byte, Y: []byte
+- decompress pubkey A into components X, Y
- **Cost**: 650
-- LogicSigVersion >= 5
+- Availability: v5
`ECDSA` Curves:
| Index | Name | Notes |
-| --- | --- | --- |
+| - | ------ | --------- |
| 0 | Secp256k1 | secp256k1 curve |
@@ -89,16 +82,15 @@ The 33 byte public key in a compressed form to be decompressed into X and Y (top
## ecdsa_pk_recover v
- Opcode: 0x07 {uint8 curve index}
-- Pops: *... stack*, {[]byte A}, {uint64 B}, {[]byte C}, {[]byte D}
-- Pushes: *... stack*, []byte, []byte
-- for (data A, recovery id B, signature C, D) recover a public key => [*... stack*, X, Y]
+- Stack: ..., A: []byte, B: uint64, C: []byte, D: []byte &rarr; ..., X: []byte, Y: []byte
+- for (data A, recovery id B, signature C, D) recover a public key
- **Cost**: 2000
-- LogicSigVersion >= 5
+- Availability: v5
`ECDSA` Curves:
| Index | Name | Notes |
-| --- | --- | --- |
+| - | ------ | --------- |
| 0 | Secp256k1 | secp256k1 curve |
@@ -107,8 +99,7 @@ S (top) and R elements of a signature, recovery id and data (bottom) are expecte
## +
- Opcode: 0x08
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A plus B. Fail on overflow.
Overflow is an error condition which halts execution and fails the transaction. Full precision is available from `addw`.
@@ -116,15 +107,13 @@ Overflow is an error condition which halts execution and fails the transaction.
## -
- Opcode: 0x09
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A minus B. Fail if B > A.
## /
- Opcode: 0x0a
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A divided by B (truncated division). Fail if B == 0.
`divmodw` is available to divide the two-element values produced by `mulw` and `addw`.
@@ -132,8 +121,7 @@ Overflow is an error condition which halts execution and fails the transaction.
## *
- Opcode: 0x0b
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A times B. Fail on overflow.
Overflow is an error condition which halts execution and fails the transaction. Full precision is available from `mulw`.
@@ -141,153 +129,134 @@ Overflow is an error condition which halts execution and fails the transaction.
## <
- Opcode: 0x0c
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A less than B => {0 or 1}
## >
- Opcode: 0x0d
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A greater than B => {0 or 1}
## <=
- Opcode: 0x0e
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A less than or equal to B => {0 or 1}
## >=
- Opcode: 0x0f
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A greater than or equal to B => {0 or 1}
## &&
- Opcode: 0x10
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A is not zero and B is not zero => {0 or 1}
## ||
- Opcode: 0x11
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A is not zero or B is not zero => {0 or 1}
## ==
- Opcode: 0x12
-- Pops: *... stack*, {any A}, {any B}
-- Pushes: uint64
+- Stack: ..., A, B &rarr; ..., uint64
- A is equal to B => {0 or 1}
## !=
- Opcode: 0x13
-- Pops: *... stack*, {any A}, {any B}
-- Pushes: uint64
+- Stack: ..., A, B &rarr; ..., uint64
- A is not equal to B => {0 or 1}
## !
- Opcode: 0x14
-- Pops: *... stack*, uint64
-- Pushes: uint64
-- X == 0 yields 1; else 0
+- Stack: ..., A: uint64 &rarr; ..., uint64
+- A == 0 yields 1; else 0
## len
- Opcode: 0x15
-- Pops: *... stack*, []byte
-- Pushes: uint64
-- yields length of byte value X
+- Stack: ..., A: []byte &rarr; ..., uint64
+- yields length of byte value A
## itob
- Opcode: 0x16
-- Pops: *... stack*, uint64
-- Pushes: []byte
-- converts uint64 X to big endian bytes
+- Stack: ..., A: uint64 &rarr; ..., []byte
+- converts uint64 A to big endian bytes
## btoi
- Opcode: 0x17
-- Pops: *... stack*, []byte
-- Pushes: uint64
-- converts bytes X as big endian to uint64
+- Stack: ..., A: []byte &rarr; ..., uint64
+- converts bytes A as big endian to uint64
`btoi` fails if the input is longer than 8 bytes.
## %
- Opcode: 0x18
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A modulo B. Fail if B == 0.
## |
- Opcode: 0x19
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A bitwise-or B
## &
- Opcode: 0x1a
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A bitwise-and B
## ^
- Opcode: 0x1b
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A bitwise-xor B
## ~
- Opcode: 0x1c
-- Pops: *... stack*, uint64
-- Pushes: uint64
-- bitwise invert value X
+- Stack: ..., A: uint64 &rarr; ..., uint64
+- bitwise invert value A
## mulw
- Opcode: 0x1d
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: *... stack*, uint64, uint64
-- A times B out to 128-bit long result as low (top) and high uint64 values on the stack
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., X: uint64, Y: uint64
+- A times B as a 128-bit result in two uint64s. X is the high 64 bits, Y is the low
## addw
- Opcode: 0x1e
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: *... stack*, uint64, uint64
-- A plus B out to 128-bit long result as sum (top) and carry-bit uint64 values on the stack
-- LogicSigVersion >= 2
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., X: uint64, Y: uint64
+- A plus B as a 128-bit result. X is the carry-bit, Y is the low-order 64 bits.
+- Availability: v2
## divmodw
- Opcode: 0x1f
-- Pops: *... stack*, {uint64 A}, {uint64 B}, {uint64 C}, {uint64 D}
-- Pushes: *... stack*, uint64, uint64, uint64, uint64
-- Pop four uint64 values. The deepest two are interpreted as a uint128 dividend (deepest value is high word), the top two are interpreted as a uint128 divisor. Four uint64 values are pushed to the stack. The deepest two are the quotient (deeper value is the high uint64). The top two are the remainder, low bits on top.
+- Stack: ..., A: uint64, B: uint64, C: uint64, D: uint64 &rarr; ..., W: uint64, X: uint64, Y: uint64, Z: uint64
+- W,X = (A,B / C,D); Y,Z = (A,B modulo C,D)
- **Cost**: 20
-- LogicSigVersion >= 4
+- Availability: v4
+
+The notation J,K indicates that two uint64 values J and K are interpreted as a uint128 value, with J as the high uint64 and K the low.
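
An illustrative sketch (not part of the opcode reference) dividing the 128-bit value 2^64 by 2:

```
#pragma version 4
int 1
int 0       // A,B = (1,0), the uint128 value 2^64
int 0
int 2       // C,D = (0,2), the uint128 value 2
divmodw     // W,X = (0, 0x8000000000000000), quotient 2^63; Y,Z = (0, 0), remainder 0
```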
## intcblock uint ...
- Opcode: 0x20 {varuint length} [{varuint value}, ...]
-- Pops: _None_
-- Pushes: _None_
+- Stack: ... &rarr; ...
- prepare block of uint64 constants for use by intc
`intcblock` loads following program bytes into an array of integer constants in the evaluator. These integer constants can be referred to by `intc` and `intc_*` which will push the value onto the stack. Subsequent calls to `intcblock` reset and replace the integer constants available to the script.
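
A minimal sketch of an explicitly assembled constant block:

```
intcblock 0 1 10
intc_2      // pushes the constant 10
intc 0      // pushes the constant 0
```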
@@ -295,43 +264,37 @@ Overflow is an error condition which halts execution and fails the transaction.
## intc i
- Opcode: 0x21 {uint8 int constant index}
-- Pops: _None_
-- Pushes: uint64
-- push Ith constant from intcblock to stack
+- Stack: ... &rarr; ..., uint64
+- Ith constant from intcblock
## intc_0
- Opcode: 0x22
-- Pops: _None_
-- Pushes: uint64
-- push constant 0 from intcblock to stack
+- Stack: ... &rarr; ..., uint64
+- constant 0 from intcblock
## intc_1
- Opcode: 0x23
-- Pops: _None_
-- Pushes: uint64
-- push constant 1 from intcblock to stack
+- Stack: ... &rarr; ..., uint64
+- constant 1 from intcblock
## intc_2
- Opcode: 0x24
-- Pops: _None_
-- Pushes: uint64
-- push constant 2 from intcblock to stack
+- Stack: ... &rarr; ..., uint64
+- constant 2 from intcblock
## intc_3
- Opcode: 0x25
-- Pops: _None_
-- Pushes: uint64
-- push constant 3 from intcblock to stack
+- Stack: ... &rarr; ..., uint64
+- constant 3 from intcblock
## bytecblock bytes ...
- Opcode: 0x26 {varuint length} [({varuint value length} bytes), ...]
-- Pops: _None_
-- Pushes: _None_
+- Stack: ... &rarr; ...
- prepare block of byte-array constants for use by bytec
`bytecblock` loads the following program bytes into an array of byte-array constants in the evaluator. These constants can be referred to by `bytec` and `bytec_*` which will push the value onto the stack. Subsequent calls to `bytecblock` reset and replace the bytes constants available to the script.
@@ -339,151 +302,140 @@ Overflow is an error condition which halts execution and fails the transaction.
## bytec i
- Opcode: 0x27 {uint8 byte constant index}
-- Pops: _None_
-- Pushes: []byte
-- push Ith constant from bytecblock to stack
+- Stack: ... &rarr; ..., []byte
+- Ith constant from bytecblock
## bytec_0
- Opcode: 0x28
-- Pops: _None_
-- Pushes: []byte
-- push constant 0 from bytecblock to stack
+- Stack: ... &rarr; ..., []byte
+- constant 0 from bytecblock
## bytec_1
- Opcode: 0x29
-- Pops: _None_
-- Pushes: []byte
-- push constant 1 from bytecblock to stack
+- Stack: ... &rarr; ..., []byte
+- constant 1 from bytecblock
## bytec_2
- Opcode: 0x2a
-- Pops: _None_
-- Pushes: []byte
-- push constant 2 from bytecblock to stack
+- Stack: ... &rarr; ..., []byte
+- constant 2 from bytecblock
## bytec_3
- Opcode: 0x2b
-- Pops: _None_
-- Pushes: []byte
-- push constant 3 from bytecblock to stack
+- Stack: ... &rarr; ..., []byte
+- constant 3 from bytecblock
## arg n
- Opcode: 0x2c {uint8 arg index N}
-- Pops: _None_
-- Pushes: []byte
-- push Nth LogicSig argument to stack
+- Stack: ... &rarr; ..., []byte
+- Nth LogicSig argument
- Mode: Signature
## arg_0
- Opcode: 0x2d
-- Pops: _None_
-- Pushes: []byte
-- push LogicSig argument 0 to stack
+- Stack: ... &rarr; ..., []byte
+- LogicSig argument 0
- Mode: Signature
## arg_1
- Opcode: 0x2e
-- Pops: _None_
-- Pushes: []byte
-- push LogicSig argument 1 to stack
+- Stack: ... &rarr; ..., []byte
+- LogicSig argument 1
- Mode: Signature
## arg_2
- Opcode: 0x2f
-- Pops: _None_
-- Pushes: []byte
-- push LogicSig argument 2 to stack
+- Stack: ... &rarr; ..., []byte
+- LogicSig argument 2
- Mode: Signature
## arg_3
- Opcode: 0x30
-- Pops: _None_
-- Pushes: []byte
-- push LogicSig argument 3 to stack
+- Stack: ... &rarr; ..., []byte
+- LogicSig argument 3
- Mode: Signature
## txn f
- Opcode: 0x31 {uint8 transaction field index}
-- Pops: _None_
-- Pushes: any
-- push field F of current transaction to stack
+- Stack: ... &rarr; ..., any
+- field F of current transaction
`txn` Fields (see [transaction reference](https://developer.algorand.org/docs/reference/transactions/)):
-| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
-| 0 | Sender | []byte | 32 byte address |
-| 1 | Fee | uint64 | micro-Algos |
-| 2 | FirstValid | uint64 | round number |
-| 3 | FirstValidTime | uint64 | Causes program to fail; reserved for future use |
-| 4 | LastValid | uint64 | round number |
-| 5 | Note | []byte | Any data up to 1024 bytes |
-| 6 | Lease | []byte | 32 byte lease value |
-| 7 | Receiver | []byte | 32 byte address |
-| 8 | Amount | uint64 | micro-Algos |
-| 9 | CloseRemainderTo | []byte | 32 byte address |
-| 10 | VotePK | []byte | 32 byte address |
-| 11 | SelectionPK | []byte | 32 byte address |
-| 12 | VoteFirst | uint64 | The first round that the participation key is valid. |
-| 13 | VoteLast | uint64 | The last round that the participation key is valid. |
-| 14 | VoteKeyDilution | uint64 | Dilution for the 2-level participation key |
-| 15 | Type | []byte | Transaction type as bytes |
-| 16 | TypeEnum | uint64 | See table below |
-| 17 | XferAsset | uint64 | Asset ID |
-| 18 | AssetAmount | uint64 | value in Asset's units |
-| 19 | AssetSender | []byte | 32 byte address. Causes clawback of all value of asset from AssetSender if Sender is the Clawback address of the asset. |
-| 20 | AssetReceiver | []byte | 32 byte address |
-| 21 | AssetCloseTo | []byte | 32 byte address |
-| 22 | GroupIndex | uint64 | Position of this transaction within an atomic transaction group. A stand-alone transaction is implicitly element 0 in a group of 1 |
-| 23 | TxID | []byte | The computed ID for this transaction. 32 bytes. |
-| 24 | ApplicationID | uint64 | ApplicationID from ApplicationCall transaction. LogicSigVersion >= 2. |
-| 25 | OnCompletion | uint64 | ApplicationCall transaction on completion action. LogicSigVersion >= 2. |
-| 26 | ApplicationArgs | []byte | Arguments passed to the application in the ApplicationCall transaction. LogicSigVersion >= 2. |
-| 27 | NumAppArgs | uint64 | Number of ApplicationArgs. LogicSigVersion >= 2. |
-| 28 | Accounts | []byte | Accounts listed in the ApplicationCall transaction. LogicSigVersion >= 2. |
-| 29 | NumAccounts | uint64 | Number of Accounts. LogicSigVersion >= 2. |
-| 30 | ApprovalProgram | []byte | Approval program. LogicSigVersion >= 2. |
-| 31 | ClearStateProgram | []byte | Clear state program. LogicSigVersion >= 2. |
-| 32 | RekeyTo | []byte | 32 byte Sender's new AuthAddr. LogicSigVersion >= 2. |
-| 33 | ConfigAsset | uint64 | Asset ID in asset config transaction. LogicSigVersion >= 2. |
-| 34 | ConfigAssetTotal | uint64 | Total number of units of this asset created. LogicSigVersion >= 2. |
-| 35 | ConfigAssetDecimals | uint64 | Number of digits to display after the decimal place when displaying the asset. LogicSigVersion >= 2. |
-| 36 | ConfigAssetDefaultFrozen | uint64 | Whether the asset's slots are frozen by default or not, 0 or 1. LogicSigVersion >= 2. |
-| 37 | ConfigAssetUnitName | []byte | Unit name of the asset. LogicSigVersion >= 2. |
-| 38 | ConfigAssetName | []byte | The asset name. LogicSigVersion >= 2. |
-| 39 | ConfigAssetURL | []byte | URL. LogicSigVersion >= 2. |
-| 40 | ConfigAssetMetadataHash | []byte | 32 byte commitment to some unspecified asset metadata. LogicSigVersion >= 2. |
-| 41 | ConfigAssetManager | []byte | 32 byte address. LogicSigVersion >= 2. |
-| 42 | ConfigAssetReserve | []byte | 32 byte address. LogicSigVersion >= 2. |
-| 43 | ConfigAssetFreeze | []byte | 32 byte address. LogicSigVersion >= 2. |
-| 44 | ConfigAssetClawback | []byte | 32 byte address. LogicSigVersion >= 2. |
-| 45 | FreezeAsset | uint64 | Asset ID being frozen or un-frozen. LogicSigVersion >= 2. |
-| 46 | FreezeAssetAccount | []byte | 32 byte address of the account whose asset slot is being frozen or un-frozen. LogicSigVersion >= 2. |
-| 47 | FreezeAssetFrozen | uint64 | The new frozen value, 0 or 1. LogicSigVersion >= 2. |
-| 48 | Assets | uint64 | Foreign Assets listed in the ApplicationCall transaction. LogicSigVersion >= 3. |
-| 49 | NumAssets | uint64 | Number of Assets. LogicSigVersion >= 3. |
-| 50 | Applications | uint64 | Foreign Apps listed in the ApplicationCall transaction. LogicSigVersion >= 3. |
-| 51 | NumApplications | uint64 | Number of Applications. LogicSigVersion >= 3. |
-| 52 | GlobalNumUint | uint64 | Number of global state integers in ApplicationCall. LogicSigVersion >= 3. |
-| 53 | GlobalNumByteSlice | uint64 | Number of global state byteslices in ApplicationCall. LogicSigVersion >= 3. |
-| 54 | LocalNumUint | uint64 | Number of local state integers in ApplicationCall. LogicSigVersion >= 3. |
-| 55 | LocalNumByteSlice | uint64 | Number of local state byteslices in ApplicationCall. LogicSigVersion >= 3. |
-| 56 | ExtraProgramPages | uint64 | Number of additional pages for each of the application's approval and clear state programs. An ExtraProgramPages of 1 means 2048 more total bytes, or 1024 for each program. LogicSigVersion >= 4. |
-| 57 | Nonparticipation | uint64 | Marks an account nonparticipating for rewards. LogicSigVersion >= 5. |
-| 58 | Logs | []byte | Log messages emitted by an application call (itxn only). LogicSigVersion >= 5. |
-| 59 | NumLogs | uint64 | Number of Logs (itxn only). LogicSigVersion >= 5. |
-| 60 | CreatedAssetID | uint64 | Asset ID allocated by the creation of an ASA (itxn only). LogicSigVersion >= 5. |
-| 61 | CreatedApplicationID | uint64 | ApplicationID allocated by the creation of an application (itxn only). LogicSigVersion >= 5. |
+| Index | Name | Type | In | Notes |
+| - | ------ | -- | - | --------- |
+| 0 | Sender | []byte | | 32 byte address |
+| 1 | Fee | uint64 | | microalgos |
+| 2 | FirstValid | uint64 | | round number |
+| 3 | FirstValidTime | uint64 | | Causes program to fail; reserved for future use |
+| 4 | LastValid | uint64 | | round number |
+| 5 | Note | []byte | | Any data up to 1024 bytes |
+| 6 | Lease | []byte | | 32 byte lease value |
+| 7 | Receiver | []byte | | 32 byte address |
+| 8 | Amount | uint64 | | microalgos |
+| 9 | CloseRemainderTo | []byte | | 32 byte address |
+| 10 | VotePK | []byte | | 32 byte address |
+| 11 | SelectionPK | []byte | | 32 byte address |
+| 12 | VoteFirst | uint64 | | The first round that the participation key is valid. |
+| 13 | VoteLast | uint64 | | The last round that the participation key is valid. |
+| 14 | VoteKeyDilution | uint64 | | Dilution for the 2-level participation key |
+| 15 | Type | []byte | | Transaction type as bytes |
+| 16 | TypeEnum | uint64 | | See table below |
+| 17 | XferAsset | uint64 | | Asset ID |
+| 18 | AssetAmount | uint64 | | value in Asset's units |
+| 19 | AssetSender | []byte | | 32 byte address. Causes clawback of all value of asset from AssetSender if Sender is the Clawback address of the asset. |
+| 20 | AssetReceiver | []byte | | 32 byte address |
+| 21 | AssetCloseTo | []byte | | 32 byte address |
+| 22 | GroupIndex | uint64 | | Position of this transaction within an atomic transaction group. A stand-alone transaction is implicitly element 0 in a group of 1 |
+| 23 | TxID | []byte | | The computed ID for this transaction. 32 bytes. |
+| 24 | ApplicationID | uint64 | v2 | ApplicationID from ApplicationCall transaction |
+| 25 | OnCompletion | uint64 | v2 | ApplicationCall transaction on completion action |
+| 26 | ApplicationArgs | []byte | v2 | Arguments passed to the application in the ApplicationCall transaction |
+| 27 | NumAppArgs | uint64 | v2 | Number of ApplicationArgs |
+| 28 | Accounts | []byte | v2 | Accounts listed in the ApplicationCall transaction |
+| 29 | NumAccounts | uint64 | v2 | Number of Accounts |
+| 30 | ApprovalProgram | []byte | v2 | Approval program |
+| 31 | ClearStateProgram | []byte | v2 | Clear state program |
+| 32 | RekeyTo | []byte | v2 | 32 byte Sender's new AuthAddr |
+| 33 | ConfigAsset | uint64 | v2 | Asset ID in asset config transaction |
+| 34 | ConfigAssetTotal | uint64 | v2 | Total number of units of this asset created |
+| 35 | ConfigAssetDecimals | uint64 | v2 | Number of digits to display after the decimal place when displaying the asset |
+| 36 | ConfigAssetDefaultFrozen | uint64 | v2 | Whether the asset's slots are frozen by default or not, 0 or 1 |
+| 37 | ConfigAssetUnitName | []byte | v2 | Unit name of the asset |
+| 38 | ConfigAssetName | []byte | v2 | The asset name |
+| 39 | ConfigAssetURL | []byte | v2 | URL |
+| 40 | ConfigAssetMetadataHash | []byte | v2 | 32 byte commitment to some unspecified asset metadata |
+| 41 | ConfigAssetManager | []byte | v2 | 32 byte address |
+| 42 | ConfigAssetReserve | []byte | v2 | 32 byte address |
+| 43 | ConfigAssetFreeze | []byte | v2 | 32 byte address |
+| 44 | ConfigAssetClawback | []byte | v2 | 32 byte address |
+| 45 | FreezeAsset | uint64 | v2 | Asset ID being frozen or un-frozen |
+| 46 | FreezeAssetAccount | []byte | v2 | 32 byte address of the account whose asset slot is being frozen or un-frozen |
+| 47 | FreezeAssetFrozen | uint64 | v2 | The new frozen value, 0 or 1 |
+| 48 | Assets | uint64 | v3 | Foreign Assets listed in the ApplicationCall transaction |
+| 49 | NumAssets | uint64 | v3 | Number of Assets |
+| 50 | Applications | uint64 | v3 | Foreign Apps listed in the ApplicationCall transaction |
+| 51 | NumApplications | uint64 | v3 | Number of Applications |
+| 52 | GlobalNumUint | uint64 | v3 | Number of global state integers in ApplicationCall |
+| 53 | GlobalNumByteSlice | uint64 | v3 | Number of global state byteslices in ApplicationCall |
+| 54 | LocalNumUint | uint64 | v3 | Number of local state integers in ApplicationCall |
+| 55 | LocalNumByteSlice | uint64 | v3 | Number of local state byteslices in ApplicationCall |
+| 56 | ExtraProgramPages | uint64 | v4 | Number of additional pages for each of the application's approval and clear state programs. An ExtraProgramPages of 1 means 2048 more total bytes, or 1024 for each program. |
+| 57 | Nonparticipation | uint64 | v5 | Marks an account nonparticipating for rewards |
+| 58 | Logs | []byte | v5 | Log messages emitted by an application call (`itxn` only until v6). Application mode only |
+| 59 | NumLogs | uint64 | v5 | Number of Logs (`itxn` only until v6). Application mode only |
+| 60 | CreatedAssetID | uint64 | v5 | Asset ID allocated by the creation of an ASA (`itxn` only until v6). Application mode only |
+| 61 | CreatedApplicationID | uint64 | v5 | ApplicationID allocated by the creation of an application (`itxn` only until v6). Application mode only |
TypeEnum mapping:
@@ -504,92 +456,86 @@ FirstValidTime causes the program to fail. The field is reserved for future use.
## global f
- Opcode: 0x32 {uint8 global field index}
-- Pops: _None_
-- Pushes: any
-- push value from globals to stack
+- Stack: ... &rarr; ..., any
+- global field F
`global` Fields:
-| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
-| 0 | MinTxnFee | uint64 | micro Algos |
-| 1 | MinBalance | uint64 | micro Algos |
-| 2 | MaxTxnLife | uint64 | rounds |
-| 3 | ZeroAddress | []byte | 32 byte address of all zero bytes |
-| 4 | GroupSize | uint64 | Number of transactions in this atomic transaction group. At least 1 |
-| 5 | LogicSigVersion | uint64 | Maximum supported TEAL version. LogicSigVersion >= 2. |
-| 6 | Round | uint64 | Current round number. LogicSigVersion >= 2. |
-| 7 | LatestTimestamp | uint64 | Last confirmed block UNIX timestamp. Fails if negative. LogicSigVersion >= 2. |
-| 8 | CurrentApplicationID | uint64 | ID of current application executing. Fails in LogicSigs. LogicSigVersion >= 2. |
-| 9 | CreatorAddress | []byte | Address of the creator of the current application. Fails if no such application is executing. LogicSigVersion >= 3. |
-| 10 | CurrentApplicationAddress | []byte | Address that the current application controls. Fails in LogicSigs. LogicSigVersion >= 5. |
-| 11 | GroupID | []byte | ID of the transaction group. 32 zero bytes if the transaction is not part of a group. LogicSigVersion >= 5. |
+| Index | Name | Type | In | Notes |
+| - | ------ | -- | - | --------- |
+| 0 | MinTxnFee | uint64 | | microalgos |
+| 1 | MinBalance | uint64 | | microalgos |
+| 2 | MaxTxnLife | uint64 | | rounds |
+| 3 | ZeroAddress | []byte | | 32 byte address of all zero bytes |
+| 4 | GroupSize | uint64 | | Number of transactions in this atomic transaction group. At least 1 |
+| 5 | LogicSigVersion | uint64 | v2 | Maximum supported version |
+| 6 | Round | uint64 | v2 | Current round number. Application mode only. |
+| 7 | LatestTimestamp | uint64 | v2 | Last confirmed block UNIX timestamp. Fails if negative. Application mode only. |
+| 8 | CurrentApplicationID | uint64 | v2 | ID of current application executing. Application mode only. |
+| 9 | CreatorAddress | []byte | v3 | Address of the creator of the current application. Application mode only. |
+| 10 | CurrentApplicationAddress | []byte | v5 | Address that the current application controls. Application mode only. |
+| 11 | GroupID | []byte | v5 | ID of the transaction group. 32 zero bytes if the transaction is not part of a group. |
+| 12 | OpcodeBudget | uint64 | v6 | The remaining cost that can be spent by opcodes in this program. |
+| 13 | CallerApplicationID | uint64 | v6 | The application ID of the application that called this application. 0 if this application is at the top-level. Application mode only. |
+| 14 | CallerApplicationAddress | []byte | v6 | The application address of the application that called this application. ZeroAddress if this application is at the top-level. Application mode only. |
## gtxn t f
- Opcode: 0x33 {uint8 transaction group index} {uint8 transaction field index}
-- Pops: _None_
-- Pushes: any
-- push field F of the Tth transaction in the current group
+- Stack: ... &rarr; ..., any
+- field F of the Tth transaction in the current group
for notes on transaction fields available, see `txn`. If this transaction is _i_ in the group, `gtxn i field` is equivalent to `txn field`.
## load i
- Opcode: 0x34 {uint8 position in scratch space to load from}
-- Pops: _None_
-- Pushes: any
-- copy a value from scratch space to the stack. All scratch spaces are 0 at program start.
+- Stack: ... &rarr; ..., any
+- Ith scratch space value. All scratch spaces are 0 at program start.
## store i
- Opcode: 0x35 {uint8 position in scratch space to store to}
-- Pops: *... stack*, any
-- Pushes: _None_
-- pop value X. store X to the Ith scratch space
+- Stack: ..., A &rarr; ...
+- store A to the Ith scratch space
## txna f i
- Opcode: 0x36 {uint8 transaction field index} {uint8 transaction field array index}
-- Pops: _None_
-- Pushes: any
-- push Ith value of the array field F of the current transaction
-- LogicSigVersion >= 2
+- Stack: ... &rarr; ..., any
+- Ith value of the array field F of the current transaction
+- Availability: v2
## gtxna t f i
- Opcode: 0x37 {uint8 transaction group index} {uint8 transaction field index} {uint8 transaction field array index}
-- Pops: _None_
-- Pushes: any
-- push Ith value of the array field F from the Tth transaction in the current group
-- LogicSigVersion >= 2
+- Stack: ... &rarr; ..., any
+- Ith value of the array field F from the Tth transaction in the current group
+- Availability: v2
## gtxns f
- Opcode: 0x38 {uint8 transaction field index}
-- Pops: *... stack*, uint64
-- Pushes: any
-- push field F of the Xth transaction in the current group
-- LogicSigVersion >= 3
+- Stack: ..., A: uint64 &rarr; ..., any
+- field F of the Ath transaction in the current group
+- Availability: v3
for notes on transaction fields available, see `txn`. If top of stack is _i_, `gtxns field` is equivalent to `gtxn _i_ field`. gtxns exists so that _i_ can be calculated, often based on the index of the current transaction.
## gtxnsa f i
- Opcode: 0x39 {uint8 transaction field index} {uint8 transaction field array index}
-- Pops: *... stack*, uint64
-- Pushes: any
-- push Ith value of the array field F from the Xth transaction in the current group
-- LogicSigVersion >= 3
+- Stack: ..., A: uint64 &rarr; ..., any
+- Ith value of the array field F from the Ath transaction in the current group
+- Availability: v3
## gload t i
- Opcode: 0x3a {uint8 transaction group index} {uint8 position in scratch space to load from}
-- Pops: _None_
-- Pushes: any
-- push Ith scratch space index of the Tth transaction in the current group
-- LogicSigVersion >= 4
+- Stack: ... &rarr; ..., any
+- Ith scratch space value of the Tth transaction in the current group
+- Availability: v4
- Mode: Application
`gload` fails unless the requested transaction is an ApplicationCall and T < GroupIndex.
@@ -597,21 +543,19 @@ for notes on transaction fields available, see `txn`. If top of stack is _i_, `g
## gloads i
- Opcode: 0x3b {uint8 position in scratch space to load from}
-- Pops: *... stack*, uint64
-- Pushes: any
-- push Ith scratch space index of the Xth transaction in the current group
-- LogicSigVersion >= 4
+- Stack: ..., A: uint64 &rarr; ..., any
+- Ith scratch space value of the Ath transaction in the current group
+- Availability: v4
- Mode: Application
-`gloads` fails unless the requested transaction is an ApplicationCall and X < GroupIndex.
+`gloads` fails unless the requested transaction is an ApplicationCall and A < GroupIndex.
## gaid t
- Opcode: 0x3c {uint8 transaction group index}
-- Pops: _None_
-- Pushes: uint64
-- push the ID of the asset or application created in the Tth transaction of the current group
-- LogicSigVersion >= 4
+- Stack: ... &rarr; ..., uint64
+- ID of the asset or application created in the Tth transaction of the current group
+- Availability: v4
- Mode: Application
`gaid` fails unless the requested transaction created an asset or application and T < GroupIndex.
@@ -619,36 +563,32 @@ for notes on transaction fields available, see `txn`. If top of stack is _i_, `g
## gaids
- Opcode: 0x3d
-- Pops: *... stack*, uint64
-- Pushes: uint64
-- push the ID of the asset or application created in the Xth transaction of the current group
-- LogicSigVersion >= 4
+- Stack: ..., A: uint64 &rarr; ..., uint64
+- ID of the asset or application created in the Ath transaction of the current group
+- Availability: v4
- Mode: Application
-`gaids` fails unless the requested transaction created an asset or application and X < GroupIndex.
+`gaids` fails unless the requested transaction created an asset or application and A < GroupIndex.
## loads
- Opcode: 0x3e
-- Pops: *... stack*, uint64
-- Pushes: any
-- copy a value from the Xth scratch space to the stack. All scratch spaces are 0 at program start.
-- LogicSigVersion >= 5
+- Stack: ..., A: uint64 &rarr; ..., any
+- Ath scratch space value. All scratch spaces are 0 at program start.
+- Availability: v5
## stores
- Opcode: 0x3f
-- Pops: *... stack*, {uint64 A}, {any B}
-- Pushes: _None_
-- pop indexes A and B. store B to the Ath scratch space
-- LogicSigVersion >= 5
+- Stack: ..., A: uint64, B &rarr; ...
+- store B to the Ath scratch space
+- Availability: v5
## bnz target
- Opcode: 0x40 {int16 branch offset, big endian}
-- Pops: *... stack*, uint64
-- Pushes: _None_
-- branch to TARGET if value X is not zero
+- Stack: ..., A: uint64 &rarr; ...
+- branch to TARGET if value A is not zero
The `bnz` instruction opcode 0x40 is followed by two immediate data bytes which are a high byte first and low byte second which together form a 16 bit offset which the instruction may branch to. For a bnz instruction at `pc`, if the last element of the stack is not zero then branch to instruction at `pc + 3 + N`, else proceed to next instruction at `pc + 3`. Branch targets must be aligned instructions. (e.g. Branching to the second byte of a 2 byte op will be rejected.) Starting at v4, the offset is treated as a signed 16 bit integer allowing for backward branches and looping. In prior versions (v1 to v3), branch offsets are limited to forward branches only, 0-0x7fff.
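
A minimal sketch of a forward branch:

```
int 1
bnz happy
err
happy:
int 1
```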
@@ -657,265 +597,235 @@ At v2 it became allowed to branch to the end of the program exactly after the la
## bz target
- Opcode: 0x41 {int16 branch offset, big endian}
-- Pops: *... stack*, uint64
-- Pushes: _None_
-- branch to TARGET if value X is zero
-- LogicSigVersion >= 2
+- Stack: ..., A: uint64 &rarr; ...
+- branch to TARGET if value A is zero
+- Availability: v2
See `bnz` for details on how branches work. `bz` inverts the behavior of `bnz`.
## b target
- Opcode: 0x42 {int16 branch offset, big endian}
-- Pops: _None_
-- Pushes: _None_
+- Stack: ... &rarr; ...
- branch unconditionally to TARGET
-- LogicSigVersion >= 2
+- Availability: v2
See `bnz` for details on how branches work. `b` always jumps to the offset.
## return
- Opcode: 0x43
-- Pops: *... stack*, uint64
-- Pushes: _None_
-- use last value on stack as success value; end
-- LogicSigVersion >= 2
+- Stack: ..., A: uint64 &rarr; ...
+- use A as success value; end
+- Availability: v2
## assert
- Opcode: 0x44
-- Pops: *... stack*, uint64
-- Pushes: _None_
-- immediately fail unless value X is a non-zero number
-- LogicSigVersion >= 3
+- Stack: ..., A: uint64 &rarr; ...
+- immediately fail unless A is a non-zero number
+- Availability: v3
## pop
- Opcode: 0x48
-- Pops: *... stack*, any
-- Pushes: _None_
-- discard value X from stack
+- Stack: ..., A &rarr; ...
+- discard A
## dup
- Opcode: 0x49
-- Pops: *... stack*, any
-- Pushes: *... stack*, any, any
-- duplicate last value on stack
+- Stack: ..., A &rarr; ..., A, A
+- duplicate A
## dup2
- Opcode: 0x4a
-- Pops: *... stack*, {any A}, {any B}
-- Pushes: *... stack*, any, any, any, any
-- duplicate two last values on stack: A, B -> A, B, A, B
-- LogicSigVersion >= 2
+- Stack: ..., A, B &rarr; ..., A, B, A, B
+- duplicate A and B
+- Availability: v2
## dig n
- Opcode: 0x4b {uint8 depth}
-- Pops: *... stack*, any
-- Pushes: *... stack*, any, any
-- push the Nth value from the top of the stack. dig 0 is equivalent to dup
-- LogicSigVersion >= 3
+- Stack: ..., A, [N items] &rarr; ..., A, [N items], A
+- Nth value from the top of the stack. dig 0 is equivalent to dup
+- Availability: v3
## swap
- Opcode: 0x4c
-- Pops: *... stack*, {any A}, {any B}
-- Pushes: *... stack*, any, any
-- swaps two last values on stack: A, B -> B, A
-- LogicSigVersion >= 3
+- Stack: ..., A, B &rarr; ..., B, A
+- swaps A and B on stack
+- Availability: v3
## select
- Opcode: 0x4d
-- Pops: *... stack*, {any A}, {any B}, {uint64 C}
-- Pushes: any
-- selects one of two values based on top-of-stack: A, B, C -> (if C != 0 then B else A)
-- LogicSigVersion >= 3
+- Stack: ..., A, B, C &rarr; ..., A or B
+- selects one of two values based on top-of-stack: B if C != 0, else A
+- Availability: v3
## cover n
- Opcode: 0x4e {uint8 depth}
-- Pops: *... stack*, any
-- Pushes: any
+- Stack: ..., [N items], A &rarr; ..., A, [N items]
- remove top of stack, and place it deeper in the stack such that N elements are above it. Fails if stack depth <= N.
-- LogicSigVersion >= 5
+- Availability: v5
## uncover n
- Opcode: 0x4f {uint8 depth}
-- Pops: *... stack*, any
-- Pushes: any
+- Stack: ..., A, [N items] &rarr; ..., [N items], A
- remove the value at depth N in the stack and shift above items down so the Nth deep value is on top of the stack. Fails if stack depth <= N.
-- LogicSigVersion >= 5
+- Availability: v5
## concat
- Opcode: 0x50
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: []byte
-- pop two byte-arrays A and B and join them, push the result
-- LogicSigVersion >= 2
+- Stack: ..., A: []byte, B: []byte &rarr; ..., []byte
+- join A and B
+- Availability: v2
`concat` fails if the result would be greater than 4096 bytes.
## substring s e
- Opcode: 0x51 {uint8 start position} {uint8 end position}
-- Pops: *... stack*, []byte
-- Pushes: []byte
-- pop a byte-array A. For immediate values in 0..255 S and E: extract a range of bytes from A starting at S up to but not including E, push the substring result. If E < S, or either is larger than the array length, the program fails
-- LogicSigVersion >= 2
+- Stack: ..., A: []byte &rarr; ..., []byte
+- A range of bytes from A starting at S up to but not including E. If E < S, or either is larger than the array length, the program fails
+- Availability: v2
## substring3
- Opcode: 0x52
-- Pops: *... stack*, {[]byte A}, {uint64 B}, {uint64 C}
-- Pushes: []byte
-- pop a byte-array A and two integers B and C. Extract a range of bytes from A starting at B up to but not including C, push the substring result. If C < B, or either is larger than the array length, the program fails
-- LogicSigVersion >= 2
+- Stack: ..., A: []byte, B: uint64, C: uint64 &rarr; ..., []byte
+- A range of bytes from A starting at B up to but not including C. If C < B, or either is larger than the array length, the program fails
+- Availability: v2
## getbit
- Opcode: 0x53
-- Pops: *... stack*, {any A}, {uint64 B}
-- Pushes: uint64
-- pop a target A (integer or byte-array), and index B. Push the Bth bit of A.
-- LogicSigVersion >= 3
+- Stack: ..., A, B: uint64 &rarr; ..., uint64
+- Bth bit of (byte-array or integer) A.
+- Availability: v3
see explanation of bit ordering in setbit
## setbit
- Opcode: 0x54
-- Pops: *... stack*, {any A}, {uint64 B}, {uint64 C}
-- Pushes: any
-- pop a target A, index B, and bit C. Set the Bth bit of A to C, and push the result
-- LogicSigVersion >= 3
+- Stack: ..., A, B: uint64, C: uint64 &rarr; ..., any
+- Copy of (byte-array or integer) A, with the Bth bit set to (0 or 1) C
+- Availability: v3
When A is a uint64, index 0 is the least significant bit. Setting bit 3 to 1 on the integer 0 yields 8, or 2^3. When A is a byte array, index 0 is the leftmost bit of the leftmost byte. Setting bits 0 through 11 to 1 in a 4-byte-array of 0s yields the byte array 0xfff00000. Setting bit 3 to 1 on the 1-byte-array 0x00 yields the byte array 0x10.
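
A short sketch of the two worked cases above:

```
#pragma version 3
int 0
int 3
int 1
setbit      // integer case: yields 8 (2^3)
pop
byte 0x00
int 3
int 1
setbit      // byte-array case: yields 0x10 (bit 0 is the leftmost bit of the leftmost byte)
```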
## getbyte
- Opcode: 0x55
-- Pops: *... stack*, {[]byte A}, {uint64 B}
-- Pushes: uint64
-- pop a byte-array A and integer B. Extract the Bth byte of A and push it as an integer
-- LogicSigVersion >= 3
+- Stack: ..., A: []byte, B: uint64 &rarr; ..., uint64
+- Bth byte of A, as an integer
+- Availability: v3
## setbyte
- Opcode: 0x56
-- Pops: *... stack*, {[]byte A}, {uint64 B}, {uint64 C}
-- Pushes: []byte
-- pop a byte-array A, integer B, and small integer C (between 0..255). Set the Bth byte of A to C, and push the result
-- LogicSigVersion >= 3
+- Stack: ..., A: []byte, B: uint64, C: uint64 &rarr; ..., []byte
+- Copy of A with the Bth byte set to small integer (between 0..255) C
+- Availability: v3
## extract s l
- Opcode: 0x57 {uint8 start position} {uint8 length}
-- Pops: *... stack*, []byte
-- Pushes: []byte
-- pop a byte-array A. For immediate values in 0..255 S and L: extract a range of bytes from A starting at S up to but not including S+L, push the substring result. If L is 0, then extract to the end of the string. If S or S+L is larger than the array length, the program fails
-- LogicSigVersion >= 5
+- Stack: ..., A: []byte &rarr; ..., []byte
+- A range of bytes from A starting at S up to but not including S+L. If L is 0, then extract to the end of the string. If S or S+L is larger than the array length, the program fails
+- Availability: v5
## extract3
- Opcode: 0x58
-- Pops: *... stack*, {[]byte A}, {uint64 B}, {uint64 C}
-- Pushes: []byte
-- pop a byte-array A and two integers B and C. Extract a range of bytes from A starting at B up to but not including B+C, push the substring result. If B+C is larger than the array length, the program fails
-- LogicSigVersion >= 5
+- Stack: ..., A: []byte, B: uint64, C: uint64 &rarr; ..., []byte
+- A range of bytes from A starting at B up to but not including B+C. If B+C is larger than the array length, the program fails
+- Availability: v5
## extract_uint16
- Opcode: 0x59
-- Pops: *... stack*, {[]byte A}, {uint64 B}
-- Pushes: uint64
-- pop a byte-array A and integer B. Extract a range of bytes from A starting at B up to but not including B+2, convert bytes as big endian and push the uint64 result. If B+2 is larger than the array length, the program fails
-- LogicSigVersion >= 5
+- Stack: ..., A: []byte, B: uint64 &rarr; ..., uint64
+- A uint16 formed from a range of big-endian bytes from A starting at B up to but not including B+2. If B+2 is larger than the array length, the program fails
+- Availability: v5
## extract_uint32
- Opcode: 0x5a
-- Pops: *... stack*, {[]byte A}, {uint64 B}
-- Pushes: uint64
-- pop a byte-array A and integer B. Extract a range of bytes from A starting at B up to but not including B+4, convert bytes as big endian and push the uint64 result. If B+4 is larger than the array length, the program fails
-- LogicSigVersion >= 5
+- Stack: ..., A: []byte, B: uint64 &rarr; ..., uint64
+- A uint32 formed from a range of big-endian bytes from A starting at B up to but not including B+4. If B+4 is larger than the array length, the program fails
+- Availability: v5
## extract_uint64
- Opcode: 0x5b
-- Pops: *... stack*, {[]byte A}, {uint64 B}
-- Pushes: uint64
-- pop a byte-array A and integer B. Extract a range of bytes from A starting at B up to but not including B+8, convert bytes as big endian and push the uint64 result. If B+8 is larger than the array length, the program fails
-- LogicSigVersion >= 5
+- Stack: ..., A: []byte, B: uint64 &rarr; ..., uint64
+- A uint64 formed from a range of big-endian bytes from A starting at B up to but not including B+8. If B+8 is larger than the array length, the program fails
+- Availability: v5
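A short sketch of the fixed-width extraction (v5 or later); the eight bytes starting at the given offset are decoded as a big-endian uint64:

```
byte 0x000000000000002a    // big-endian encoding of 42 in 8 bytes
int 0                      // start offset B
extract_uint64             // -> 42
```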
## base64_decode e
- Opcode: 0x5c {uint8 encoding index}
-- Pops: *... stack*, []byte
-- Pushes: []byte
-- decode X which was base64-encoded using _encoding_ E. Fail if X is not base64 encoded with encoding E
+- Stack: ..., A: []byte &rarr; ..., []byte
+- decode A which was base64-encoded using _encoding_ E. Fail if A is not base64 encoded with encoding E
- **Cost**: 25
-- LogicSigVersion >= 6
+- Availability: v6
-Decodes X using the base64 encoding E. Specify the encoding with an immediate arg either as URL and Filename Safe (`URLEncoding`) or Standard (`StdEncoding`). See <a href="https://rfc-editor.org/rfc/rfc4648.html#section-4">RFC 4648</a> (sections 4 and 5). It is assumed that the encoding ends with the exact number of `=` padding characters as required by the RFC. When padding occurs, any unused pad bits in the encoding must be set to zero or the decoding will fail. The special cases of `\n` and `\r` are allowed but completely ignored. An error will result when attempting to decode a string with a character that is not in the encoding alphabet or not one of `=`, `\r`, or `\n`.
+Decodes A using the base64 encoding E. Specify the encoding with an immediate arg either as URL and Filename Safe (`URLEncoding`) or Standard (`StdEncoding`). See <a href="https://rfc-editor.org/rfc/rfc4648.html#section-4">RFC 4648</a> (sections 4 and 5). It is assumed that the encoding ends with the exact number of `=` padding characters as required by the RFC. When padding occurs, any unused pad bits in the encoding must be set to zero or the decoding will fail. The special cases of `\n` and `\r` are allowed but completely ignored. An error will result when attempting to decode a string with a character that is not in the encoding alphabet or not one of `=`, `\r`, or `\n`.
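As a quick sketch, assuming the standard alphabet with the required `=` padding (v6 or later):

```
byte "SGVsbG8="             // base64 (StdEncoding) of "Hello"
base64_decode StdEncoding   // -> "Hello" (0x48656c6c6f)
```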
## balance
- Opcode: 0x60
-- Pops: *... stack*, any
-- Pushes: uint64
+- Stack: ..., A &rarr; ..., uint64
- get balance for account A, in microalgos. The balance is observed after the effects of previous transactions in the group, and after the fee for the current transaction is deducted.
-- LogicSigVersion >= 2
+- Availability: v2
- Mode: Application
-params: Before v4, Txn.Accounts offset. Since v4, Txn.Accounts offset or an account address that appears in Txn.Accounts or is Txn.Sender). Return: value.
+params: Txn.Accounts offset (or, since v4, an _available_ account address). Return: value.
## app_opted_in
- Opcode: 0x61
-- Pops: *... stack*, {any A}, {uint64 B}
-- Pushes: uint64
-- check if account A opted in for the application B => {0 or 1}
-- LogicSigVersion >= 2
+- Stack: ..., A, B: uint64 &rarr; ..., uint64
+- 1 if account A is opted in to application B, else 0
+- Availability: v2
- Mode: Application
-params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), application id (or, since v4, a Txn.ForeignApps offset). Return: 1 if opted in and 0 otherwise.
+params: Txn.Accounts offset (or, since v4, an _available_ account address), _available_ application id (or, since v4, a Txn.ForeignApps offset). Return: 1 if opted in and 0 otherwise.
## app_local_get
- Opcode: 0x62
-- Pops: *... stack*, {any A}, {[]byte B}
-- Pushes: any
-- read from account A from local state of the current application key B => value
-- LogicSigVersion >= 2
+- Stack: ..., A, B: []byte &rarr; ..., any
+- local state of the key B in the current application in account A
+- Availability: v2
- Mode: Application
-params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), state key. Return: value. The value is zero (of type uint64) if the key does not exist.
+params: Txn.Accounts offset (or, since v4, an _available_ account address), state key. Return: value. The value is zero (of type uint64) if the key does not exist.
## app_local_get_ex
- Opcode: 0x63
-- Pops: *... stack*, {any A}, {uint64 B}, {[]byte C}
-- Pushes: *... stack*, any, uint64
-- read from account A from local state of the application B key C => [*... stack*, value, 0 or 1]
-- LogicSigVersion >= 2
+- Stack: ..., A, B: uint64, C: []byte &rarr; ..., X: any, Y: uint64
+- X is the local state of application B, key C in account A. Y is 1 if key existed, else 0
+- Availability: v2
- Mode: Application
-params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), application id (or, since v4, a Txn.ForeignApps offset), state key. Return: did_exist flag (top of the stack, 1 if the application and key existed and 0 otherwise), value. The value is zero (of type uint64) if the key does not exist.
+params: Txn.Accounts offset (or, since v4, an _available_ account address), _available_ application id (or, since v4, a Txn.ForeignApps offset), state key. Return: did_exist flag (top of the stack, 1 if the application and key existed and 0 otherwise), value. The value is zero (of type uint64) if the key does not exist.
## app_global_get
- Opcode: 0x64
-- Pops: *... stack*, []byte
-- Pushes: any
-- read key A from global state of a current application => value
-- LogicSigVersion >= 2
+- Stack: ..., A: []byte &rarr; ..., any
+- global state of the key A in the current application
+- Availability: v2
- Mode: Application
params: state key. Return: value. The value is zero (of type uint64) if the key does not exist.
@@ -923,121 +833,113 @@ params: state key. Return: value. The value is zero (of type uint64) if the key
## app_global_get_ex
- Opcode: 0x65
-- Pops: *... stack*, {uint64 A}, {[]byte B}
-- Pushes: *... stack*, any, uint64
-- read from application A global state key B => [*... stack*, value, 0 or 1]
-- LogicSigVersion >= 2
+- Stack: ..., A: uint64, B: []byte &rarr; ..., X: any, Y: uint64
+- X is the global state of application A, key B. Y is 1 if key existed, else 0
+- Availability: v2
- Mode: Application
-params: Txn.ForeignApps offset (or, since v4, an application id that appears in Txn.ForeignApps or is the CurrentApplicationID), state key. Return: did_exist flag (top of the stack, 1 if the application and key existed and 0 otherwise), value. The value is zero (of type uint64) if the key does not exist.
+params: Txn.ForeignApps offset (or, since v4, an _available_ application id), state key. Return: did_exist flag (top of the stack, 1 if the application and key existed and 0 otherwise), value. The value is zero (of type uint64) if the key does not exist.
## app_local_put
- Opcode: 0x66
-- Pops: *... stack*, {any A}, {[]byte B}, {any C}
-- Pushes: _None_
-- write to account specified by A to local state of a current application key B with value C
-- LogicSigVersion >= 2
+- Stack: ..., A, B: []byte, C &rarr; ...
+- write C to key B in account A's local state of the current application
+- Availability: v2
- Mode: Application
-params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), state key, value.
+params: Txn.Accounts offset (or, since v4, an _available_ account address), state key, value.
## app_global_put
- Opcode: 0x67
-- Pops: *... stack*, {[]byte A}, {any B}
-- Pushes: _None_
-- write key A and value B to global state of the current application
-- LogicSigVersion >= 2
+- Stack: ..., A: []byte, B &rarr; ...
+- write B to key A in the global state of the current application
+- Availability: v2
- Mode: Application
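For illustration, a minimal global-state round trip with `app_global_put` and `app_global_get` (application mode); the key name is a hypothetical example:

```
byte "counter"
int 1
app_global_put          // write counter = 1
byte "counter"
app_global_get          // -> 1 (would be 0 if the key were absent)
```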
## app_local_del
- Opcode: 0x68
-- Pops: *... stack*, {any A}, {[]byte B}
-- Pushes: _None_
-- delete from account A local state key B of the current application
-- LogicSigVersion >= 2
+- Stack: ..., A, B: []byte &rarr; ...
+- delete key B from account A's local state of the current application
+- Availability: v2
- Mode: Application
-params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), state key.
+params: Txn.Accounts offset (or, since v4, an _available_ account address), state key.
Deleting a key which is already absent has no effect on the application local state. (In particular, it does _not_ cause the program to fail.)
## app_global_del
- Opcode: 0x69
-- Pops: *... stack*, []byte
-- Pushes: _None_
-- delete key A from a global state of the current application
-- LogicSigVersion >= 2
+- Stack: ..., A: []byte &rarr; ...
+- delete key A from the global state of the current application
+- Availability: v2
- Mode: Application
params: state key.
Deleting a key which is already absent has no effect on the application global state. (In particular, it does _not_ cause the program to fail.)
-## asset_holding_get i
+## asset_holding_get f
- Opcode: 0x70 {uint8 asset holding field index}
-- Pops: *... stack*, {any A}, {uint64 B}
-- Pushes: *... stack*, any, uint64
-- read from account A and asset B holding field X (imm arg) => {0 or 1 (top), value}
-- LogicSigVersion >= 2
+- Stack: ..., A, B: uint64 &rarr; ..., X: any, Y: uint64
+- X is field F from account A's holding of asset B. Y is 1 if A is opted into B, else 0
+- Availability: v2
- Mode: Application
`asset_holding_get` Fields:
| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
+| - | ------ | -- | --------- |
| 0 | AssetBalance | uint64 | Amount of the asset unit held by this account |
| 1 | AssetFrozen | uint64 | Is the asset frozen or not |
-params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), asset id (or, since v4, a Txn.ForeignAssets offset). Return: did_exist flag (1 if the asset existed and 0 otherwise), value.
+params: Txn.Accounts offset (or, since v4, an _available_ address), asset id (or, since v4, a Txn.ForeignAssets offset). Return: did_exist flag (1 if the asset existed and 0 otherwise), value.
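A sketch of a holding lookup (application mode), assuming an asset is listed at index 0 of the transaction's foreign assets array:

```
txn Sender
txna Assets 0
asset_holding_get AssetBalance   // -> balance, with 1 on top if opted in, else 0
```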
-## asset_params_get i
+## asset_params_get f
- Opcode: 0x71 {uint8 asset params field index}
-- Pops: *... stack*, uint64
-- Pushes: *... stack*, any, uint64
-- read from asset A params field X (imm arg) => {0 or 1 (top), value}
-- LogicSigVersion >= 2
+- Stack: ..., A: uint64 &rarr; ..., X: any, Y: uint64
+- X is field F from asset A. Y is 1 if A exists, else 0
+- Availability: v2
- Mode: Application
`asset_params_get` Fields:
-| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
-| 0 | AssetTotal | uint64 | Total number of units of this asset |
-| 1 | AssetDecimals | uint64 | See AssetParams.Decimals |
-| 2 | AssetDefaultFrozen | uint64 | Frozen by default or not |
-| 3 | AssetUnitName | []byte | Asset unit name |
-| 4 | AssetName | []byte | Asset name |
-| 5 | AssetURL | []byte | URL with additional info about the asset |
-| 6 | AssetMetadataHash | []byte | Arbitrary commitment |
-| 7 | AssetManager | []byte | Manager commitment |
-| 8 | AssetReserve | []byte | Reserve address |
-| 9 | AssetFreeze | []byte | Freeze address |
-| 10 | AssetClawback | []byte | Clawback address |
-| 11 | AssetCreator | []byte | Creator address. LogicSigVersion >= 5. |
+| Index | Name | Type | In | Notes |
+| - | ------ | -- | - | --------- |
+| 0 | AssetTotal | uint64 | | Total number of units of this asset |
+| 1 | AssetDecimals | uint64 | | See AssetParams.Decimals |
+| 2 | AssetDefaultFrozen | uint64 | | Frozen by default or not |
+| 3 | AssetUnitName | []byte | | Asset unit name |
+| 4 | AssetName | []byte | | Asset name |
+| 5 | AssetURL | []byte | | URL with additional info about the asset |
+| 6 | AssetMetadataHash | []byte | | Arbitrary commitment |
+| 7 | AssetManager | []byte | | Manager address |
+| 8 | AssetReserve | []byte | | Reserve address |
+| 9 | AssetFreeze | []byte | | Freeze address |
+| 10 | AssetClawback | []byte | | Clawback address |
+| 11 | AssetCreator | []byte | v5 | Creator address |
-params: Before v4, Txn.ForeignAssets offset. Since v4, Txn.ForeignAssets offset or an asset id that appears in Txn.ForeignAssets. Return: did_exist flag (1 if the asset existed and 0 otherwise), value.
+params: Txn.ForeignAssets offset (or, since v4, an _available_ asset id). Return: did_exist flag (1 if the asset existed and 0 otherwise), value.
-## app_params_get i
+## app_params_get f
- Opcode: 0x72 {uint8 app params field index}
-- Pops: *... stack*, uint64
-- Pushes: *... stack*, any, uint64
-- read from app A params field X (imm arg) => {0 or 1 (top), value}
-- LogicSigVersion >= 5
+- Stack: ..., A: uint64 &rarr; ..., X: any, Y: uint64
+- X is field F from app A. Y is 1 if A exists, else 0
+- Availability: v5
- Mode: Application
`app_params_get` Fields:
| Index | Name | Type | Notes |
-| --- | --- | --- | --- |
+| - | ------ | -- | --------- |
| 0 | AppApprovalProgram | []byte | Bytecode of Approval Program |
| 1 | AppClearStateProgram | []byte | Bytecode of Clear State Program |
| 2 | AppGlobalNumUint | uint64 | Number of uint64 values allowed in Global State |
@@ -1049,255 +951,243 @@ params: Before v4, Txn.ForeignAssets offset. Since v4, Txn.ForeignAssets offset
| 8 | AppAddress | []byte | Address for which this application has authority |
-params: Txn.ForeignApps offset or an app id that appears in Txn.ForeignApps. Return: did_exist flag (1 if the application existed and 0 otherwise), value.
+params: Txn.ForeignApps offset or an _available_ app id. Return: did_exist flag (1 if the application existed and 0 otherwise), value.
+
+## acct_params_get f
+
+- Opcode: 0x73 {uint8 account params field index}
+- Stack: ..., A: uint64 &rarr; ..., X: any, Y: uint64
+- X is field F from account A. Y is 1 if A owns positive algos, else 0
+- Availability: v6
+- Mode: Application
## min_balance
- Opcode: 0x78
-- Pops: *... stack*, any
-- Pushes: uint64
+- Stack: ..., A &rarr; ..., uint64
- get minimum required balance for account A, in microalgos. Required balance is affected by [ASA](https://developer.algorand.org/docs/features/asa/#assets-overview) and [App](https://developer.algorand.org/docs/features/asc1/stateful/#minimum-balance-requirement-for-a-smart-contract) usage. When creating or opting into an app, the minimum balance grows before the app code runs, therefore the increase is visible there. When deleting or closing out, the minimum balance decreases after the app executes.
-- LogicSigVersion >= 3
+- Availability: v3
- Mode: Application
-params: Before v4, Txn.Accounts offset. Since v4, Txn.Accounts offset or an account address that appears in Txn.Accounts or is Txn.Sender). Return: value.
+params: Txn.Accounts offset (or, since v4, an _available_ account address). Return: value.
## pushbytes bytes
- Opcode: 0x80 {varuint length} {bytes}
-- Pops: _None_
-- Pushes: []byte
-- push the following program bytes to the stack
-- LogicSigVersion >= 3
+- Stack: ... &rarr; ..., []byte
+- immediate BYTES
+- Availability: v3
pushbytes args are not added to the bytecblock during the assembly process
## pushint uint
- Opcode: 0x81 {varuint int}
-- Pops: _None_
-- Pushes: uint64
-- push immediate UINT to the stack as an integer
-- LogicSigVersion >= 3
+- Stack: ... &rarr; ..., uint64
+- immediate UINT
+- Availability: v3
pushint args are not added to the intcblock during the assembly process
## callsub target
- Opcode: 0x88 {int16 branch offset, big endian}
-- Pops: _None_
-- Pushes: _None_
+- Stack: ... &rarr; ...
- branch unconditionally to TARGET, saving the next instruction on the call stack
-- LogicSigVersion >= 4
+- Availability: v4
The call stack is separate from the data stack. Only `callsub` and `retsub` manipulate it.
## retsub
- Opcode: 0x89
-- Pops: _None_
-- Pushes: _None_
+- Stack: ... &rarr; ...
- pop the top instruction from the call stack and branch to it
-- LogicSigVersion >= 4
+- Availability: v4
The call stack is separate from the data stack. Only `callsub` and `retsub` manipulate it.
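For illustration, a minimal subroutine sketch (v4 or later); the sum is left on the data stack and consumed by `return`:

```
int 2
int 3
callsub add
return          // approve: 5 is on top of the stack
add:
+
retsub
```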
## shl
- Opcode: 0x90
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A times 2^B, modulo 2^64
-- LogicSigVersion >= 4
+- Availability: v4
## shr
- Opcode: 0x91
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A divided by 2^B
-- LogicSigVersion >= 4
+- Availability: v4
## sqrt
- Opcode: 0x92
-- Pops: *... stack*, uint64
-- Pushes: uint64
-- The largest integer B such that B^2 <= X
+- Stack: ..., A: uint64 &rarr; ..., uint64
+- The largest integer I such that I^2 <= A
- **Cost**: 4
-- LogicSigVersion >= 4
+- Availability: v4
## bitlen
- Opcode: 0x93
-- Pops: *... stack*, any
-- Pushes: uint64
-- The highest set bit in X. If X is a byte-array, it is interpreted as a big-endian unsigned integer. bitlen of 0 is 0, bitlen of 8 is 4
-- LogicSigVersion >= 4
+- Stack: ..., A &rarr; ..., uint64
+- The highest set bit in A. If A is a byte-array, it is interpreted as a big-endian unsigned integer. bitlen of 0 is 0, bitlen of 8 is 4
+- Availability: v4
bitlen interprets arrays as big-endian integers, unlike setbit/getbit
## exp
- Opcode: 0x94
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: uint64
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., uint64
- A raised to the Bth power. Fail if A == B == 0 and on overflow
-- LogicSigVersion >= 4
+- Availability: v4
## expw
- Opcode: 0x95
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: *... stack*, uint64, uint64
-- A raised to the Bth power as a 128-bit long result as low (top) and high uint64 values on the stack. Fail if A == B == 0 or if the results exceeds 2^128-1
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., X: uint64, Y: uint64
+- A raised to the Bth power as a 128-bit result in two uint64s. X is the high 64 bits, Y is the low. Fail if A == B == 0 or if the result exceeds 2^128-1
- **Cost**: 10
-- LogicSigVersion >= 4
+- Availability: v4
+
+## bsqrt
+
+- Opcode: 0x96
+- Stack: ..., A: []byte &rarr; ..., []byte
+- The largest integer I such that I^2 <= A. A and I are interpreted as big-endian unsigned integers
+- **Cost**: 40
+- Availability: v6
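A one-line sketch (v6 or later):

```
byte 0x10       // 16 as a big-endian unsigned integer
bsqrt           // -> 0x04
```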
## b+
- Opcode: 0xa0
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: []byte
-- A plus B, where A and B are byte-arrays interpreted as big-endian unsigned integers
+- Stack: ..., A: []byte, B: []byte &rarr; ..., []byte
+- A plus B. A and B are interpreted as big-endian unsigned integers
- **Cost**: 10
-- LogicSigVersion >= 4
+- Availability: v4
## b-
- Opcode: 0xa1
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: []byte
-- A minus B, where A and B are byte-arrays interpreted as big-endian unsigned integers. Fail on underflow.
+- Stack: ..., A: []byte, B: []byte &rarr; ..., []byte
+- A minus B. A and B are interpreted as big-endian unsigned integers. Fail on underflow.
- **Cost**: 10
-- LogicSigVersion >= 4
+- Availability: v4
## b/
- Opcode: 0xa2
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: []byte
-- A divided by B (truncated division), where A and B are byte-arrays interpreted as big-endian unsigned integers. Fail if B is zero.
+- Stack: ..., A: []byte, B: []byte &rarr; ..., []byte
+- A divided by B (truncated division). A and B are interpreted as big-endian unsigned integers. Fail if B is zero.
- **Cost**: 20
-- LogicSigVersion >= 4
+- Availability: v4
## b*
- Opcode: 0xa3
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: []byte
-- A times B, where A and B are byte-arrays interpreted as big-endian unsigned integers.
+- Stack: ..., A: []byte, B: []byte &rarr; ..., []byte
+- A times B. A and B are interpreted as big-endian unsigned integers.
- **Cost**: 20
-- LogicSigVersion >= 4
+- Availability: v4
## b<
- Opcode: 0xa4
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: uint64
-- A is less than B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}
-- LogicSigVersion >= 4
+- Stack: ..., A: []byte, B: []byte &rarr; ..., uint64
+- 1 if A is less than B, else 0. A and B are interpreted as big-endian unsigned integers
+- Availability: v4
## b>
- Opcode: 0xa5
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: uint64
-- A is greater than B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}
-- LogicSigVersion >= 4
+- Stack: ..., A: []byte, B: []byte &rarr; ..., uint64
+- 1 if A is greater than B, else 0. A and B are interpreted as big-endian unsigned integers
+- Availability: v4
## b<=
- Opcode: 0xa6
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: uint64
-- A is less than or equal to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}
-- LogicSigVersion >= 4
+- Stack: ..., A: []byte, B: []byte &rarr; ..., uint64
+- 1 if A is less than or equal to B, else 0. A and B are interpreted as big-endian unsigned integers
+- Availability: v4
## b>=
- Opcode: 0xa7
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: uint64
-- A is greater than or equal to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}
-- LogicSigVersion >= 4
+- Stack: ..., A: []byte, B: []byte &rarr; ..., uint64
+- 1 if A is greater than or equal to B, else 0. A and B are interpreted as big-endian unsigned integers
+- Availability: v4
## b==
- Opcode: 0xa8
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: uint64
-- A is equals to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}
-- LogicSigVersion >= 4
+- Stack: ..., A: []byte, B: []byte &rarr; ..., uint64
+- 1 if A is equal to B, else 0. A and B are interpreted as big-endian unsigned integers
+- Availability: v4
## b!=
- Opcode: 0xa9
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: uint64
-- A is not equal to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}
-- LogicSigVersion >= 4
+- Stack: ..., A: []byte, B: []byte &rarr; ..., uint64
+- 0 if A is equal to B, else 1. A and B are interpreted as big-endian unsigned integers
+- Availability: v4
## b%
- Opcode: 0xaa
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: []byte
-- A modulo B, where A and B are byte-arrays interpreted as big-endian unsigned integers. Fail if B is zero.
+- Stack: ..., A: []byte, B: []byte &rarr; ..., []byte
+- A modulo B. A and B are interpreted as big-endian unsigned integers. Fail if B is zero.
- **Cost**: 20
-- LogicSigVersion >= 4
+- Availability: v4
## b|
- Opcode: 0xab
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: []byte
-- A bitwise-or B, where A and B are byte-arrays, zero-left extended to the greater of their lengths
+- Stack: ..., A: []byte, B: []byte &rarr; ..., []byte
+- A bitwise-or B. A and B are zero-left extended to the greater of their lengths
- **Cost**: 6
-- LogicSigVersion >= 4
+- Availability: v4
## b&
- Opcode: 0xac
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: []byte
-- A bitwise-and B, where A and B are byte-arrays, zero-left extended to the greater of their lengths
+- Stack: ..., A: []byte, B: []byte &rarr; ..., []byte
+- A bitwise-and B. A and B are zero-left extended to the greater of their lengths
- **Cost**: 6
-- LogicSigVersion >= 4
+- Availability: v4
## b^
- Opcode: 0xad
-- Pops: *... stack*, {[]byte A}, {[]byte B}
-- Pushes: []byte
-- A bitwise-xor B, where A and B are byte-arrays, zero-left extended to the greater of their lengths
+- Stack: ..., A: []byte, B: []byte &rarr; ..., []byte
+- A bitwise-xor B. A and B are zero-left extended to the greater of their lengths
- **Cost**: 6
-- LogicSigVersion >= 4
+- Availability: v4
## b~
- Opcode: 0xae
-- Pops: *... stack*, []byte
-- Pushes: []byte
-- X with all bits inverted
+- Stack: ..., A: []byte &rarr; ..., []byte
+- A with all bits inverted
- **Cost**: 4
-- LogicSigVersion >= 4
+- Availability: v4
## bzero
- Opcode: 0xaf
-- Pops: *... stack*, uint64
-- Pushes: []byte
-- push a byte-array of length X, containing all zero bytes
-- LogicSigVersion >= 4
+- Stack: ..., A: uint64 &rarr; ..., []byte
+- zero filled byte-array of length A
+- Availability: v4
## log
- Opcode: 0xb0
-- Pops: *... stack*, []byte
-- Pushes: _None_
-- write bytes to log state of the current application
-- LogicSigVersion >= 5
+- Stack: ..., A: []byte &rarr; ...
+- write A to log state of the current application
+- Availability: v5
- Mode: Application
`log` fails if called more than MaxLogCalls times in a program, or if the sum of logged bytes exceeds 1024 bytes.
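A minimal sketch (v5 or later, application mode):

```
byte "transfer complete"
log
```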
@@ -1305,32 +1195,29 @@ bitlen interprets arrays as big-endian integers, unlike setbit/getbit
## itxn_begin
- Opcode: 0xb1
-- Pops: _None_
-- Pushes: _None_
+- Stack: ... &rarr; ...
- begin preparation of a new inner transaction in a new transaction group
-- LogicSigVersion >= 5
+- Availability: v5
- Mode: Application
-`itxn_begin` initializes Sender to the application address; Fee to the minimum allowable, taking into account MinTxnFee and credit from overpaying in earlier transactions; FirstValid/LastValid to the values in the top-level transaction, and all other fields to zero values.
+`itxn_begin` initializes Sender to the application address; Fee to the minimum allowable, taking into account MinTxnFee and credit from overpaying in earlier transactions; FirstValid/LastValid to the values in the invoking transaction, and all other fields to zero or empty values.
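For illustration, a minimal inner payment sketch (v5 or later, application mode), assuming the application account holds enough algos to cover the amount; fields not set explicitly keep the defaults described above:

```
itxn_begin
int pay
itxn_field TypeEnum
int 100000
itxn_field Amount
txn Sender
itxn_field Receiver
itxn_submit
```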
## itxn_field f
- Opcode: 0xb2 {uint8 transaction field index}
-- Pops: *... stack*, any
-- Pushes: _None_
-- set field F of the current inner transaction to X
-- LogicSigVersion >= 5
+- Stack: ..., A &rarr; ...
+- set field F of the current inner transaction to A
+- Availability: v5
- Mode: Application
-`itxn_field` fails if X is of the wrong type for F, including a byte array of the wrong size for use as an address when F is an address field. `itxn_field` also fails if X is an account or asset that does not appear in `txn.Accounts` or `txn.ForeignAssets` of the top-level transaction. (Setting addresses in asset creation are exempted from this requirement.)
+`itxn_field` fails if A is of the wrong type for F, including a byte array of the wrong size for use as an address when F is an address field. `itxn_field` also fails if A is an account, asset, or app that is not _available_. (Addresses set into asset params of acfg transactions need not be _available_.)
## itxn_submit
- Opcode: 0xb3
-- Pops: _None_
-- Pushes: _None_
-- execute the current inner transaction group. Fail if executing this group would exceed 16 total inner transactions, or if any transaction in the group fails.
-- LogicSigVersion >= 5
+- Stack: ... &rarr; ...
+- execute the current inner transaction group. Fail if executing this group would exceed the inner transaction limit, or if any transaction in the group fails.
+- Availability: v5
- Mode: Application
`itxn_submit` resets the current transaction so that it can not be resubmitted. A new `itxn_begin` is required to prepare another inner transaction.
@@ -1338,59 +1225,78 @@ bitlen interprets arrays as big-endian integers, unlike setbit/getbit
## itxn f
- Opcode: 0xb4 {uint8 transaction field index}
-- Pops: _None_
-- Pushes: any
-- push field F of the last inner transaction to stack
-- LogicSigVersion >= 5
+- Stack: ... &rarr; ..., any
+- field F of the last inner transaction
+- Availability: v5
- Mode: Application
## itxna f i
- Opcode: 0xb5 {uint8 transaction field index} {uint8 transaction field array index}
-- Pops: _None_
-- Pushes: any
-- push Ith value of the array field F of the last inner transaction to stack
-- LogicSigVersion >= 5
+- Stack: ... &rarr; ..., any
+- Ith value of the array field F of the last inner transaction
+- Availability: v5
- Mode: Application
## itxn_next
- Opcode: 0xb6
-- Pops: _None_
-- Pushes: _None_
+- Stack: ... &rarr; ...
- begin preparation of a new inner transaction in the same transaction group
-- LogicSigVersion >= 6
+- Availability: v6
+- Mode: Application
+
+`itxn_next` initializes the transaction exactly as `itxn_begin` does
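As a sketch of grouping, two inner payments submitted atomically (v6 or later, application mode), again assuming a funded application account:

```
itxn_begin
int pay
itxn_field TypeEnum
int 1000
itxn_field Amount
txn Sender
itxn_field Receiver
itxn_next                    // start a second inner txn in the same group
int pay
itxn_field TypeEnum
int 2000
itxn_field Amount
txn Sender
itxn_field Receiver
itxn_submit                  // both inner txns execute as one group
```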
+
+## gitxn t f
+
+- Opcode: 0xb7 {uint8 transaction group index} {uint8 transaction field index}
+- Stack: ... &rarr; ..., any
+- field F of the Tth transaction in the last inner group submitted
+- Availability: v6
+- Mode: Application
+
+## gitxna t f i
+
+- Opcode: 0xb8 {uint8 transaction group index} {uint8 transaction field index} {uint8 transaction field array index}
+- Stack: ... &rarr; ..., any
+- Ith value of the array field F from the Tth transaction in the last inner group submitted
+- Availability: v6
- Mode: Application
## txnas f
- Opcode: 0xc0 {uint8 transaction field index}
-- Pops: *... stack*, uint64
-- Pushes: any
-- push Xth value of the array field F of the current transaction
-- LogicSigVersion >= 5
+- Stack: ..., A: uint64 &rarr; ..., any
+- Ath value of the array field F of the current transaction
+- Availability: v5
## gtxnas t f
- Opcode: 0xc1 {uint8 transaction group index} {uint8 transaction field index}
-- Pops: *... stack*, uint64
-- Pushes: any
-- push Xth value of the array field F from the Tth transaction in the current group
-- LogicSigVersion >= 5
+- Stack: ..., A: uint64 &rarr; ..., any
+- Ath value of the array field F from the Tth transaction in the current group
+- Availability: v5
## gtxnsas f
- Opcode: 0xc2 {uint8 transaction field index}
-- Pops: *... stack*, {uint64 A}, {uint64 B}
-- Pushes: any
-- pop an index A and an index B. push Bth value of the array field F from the Ath transaction in the current group
-- LogicSigVersion >= 5
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., any
+- Bth value of the array field F from the Ath transaction in the current group
+- Availability: v5
## args
- Opcode: 0xc3
-- Pops: *... stack*, uint64
-- Pushes: []byte
-- push Xth LogicSig argument to stack
-- LogicSigVersion >= 5
+- Stack: ..., A: uint64 &rarr; ..., []byte
+- Ath LogicSig argument
+- Availability: v5
- Mode: Signature
+
+## gloadss
+
+- Opcode: 0xc4
+- Stack: ..., A: uint64, B: uint64 &rarr; ..., any
+- Bth scratch space value of the Ath transaction in the current group
+- Availability: v6
+- Mode: Application
diff --git a/data/transactions/logic/assembler.go b/data/transactions/logic/assembler.go
index b7a6018e5..cdf91df68 100644
--- a/data/transactions/logic/assembler.go
+++ b/data/transactions/logic/assembler.go
@@ -247,14 +247,6 @@ type OpStream struct {
HasStatefulOps bool
}
-// GetVersion returns the LogicSigVersion we're building to
-func (ops *OpStream) GetVersion() uint64 {
- if ops.Version == 0 {
- ops.Version = AssemblerDefaultVersion
- }
- return ops.Version
-}
-
// createLabel inserts a label reference to point to the next
// instruction, reporting an error for a duplicate.
func (ops *OpStream) createLabel(label string) {
@@ -437,7 +429,7 @@ func assembleIntC(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.error("intc operation needs one argument")
}
- constIndex, err := strconv.ParseUint(args[0], 0, 64)
+ constIndex, err := simpleImm(args[0], "constant")
if err != nil {
return ops.error(err)
}
@@ -448,7 +440,7 @@ func assembleByteC(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.error("bytec operation needs one argument")
}
- constIndex, err := strconv.ParseUint(args[0], 0, 64)
+ constIndex, err := simpleImm(args[0], "constant")
if err != nil {
return ops.error(err)
}
@@ -738,7 +730,7 @@ func assembleArg(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.error("arg operation needs one argument")
}
- val, err := strconv.ParseUint(args[0], 0, 64)
+ val, err := simpleImm(args[0], "argument")
if err != nil {
return ops.error(err)
}
@@ -786,20 +778,42 @@ func assembleSubstring(ops *OpStream, spec *OpSpec, args []string) error {
return nil
}
-func assembleTxn(ops *OpStream, spec *OpSpec, args []string) error {
- if len(args) != 1 {
- return ops.error("txn expects one argument")
- }
- fs, ok := txnFieldSpecByName[args[0]]
+func txnFieldImm(name string, expectArray bool, ops *OpStream) (*txnFieldSpec, error) {
+ fs, ok := TxnFieldSpecByName[name]
if !ok {
- return ops.errorf("txn unknown field: %#v", args[0])
+ return nil, fmt.Errorf("unknown field: %#v", name)
}
- _, ok = txnaFieldSpecByField[fs.field]
- if ok {
- return ops.errorf("found array field %#v in txn op", args[0])
+ if expectArray != fs.array {
+ if expectArray {
+ return nil, fmt.Errorf("found scalar field %#v while expecting array", name)
+ }
+ return nil, fmt.Errorf("found array field %#v while expecting scalar", name)
}
if fs.version > ops.Version {
- return ops.errorf("field %#v available in version %d. Missed #pragma version?", args[0], fs.version)
+ return nil,
+ fmt.Errorf("field %#v available in version %d. Missed #pragma version?", name, fs.version)
+ }
+ return &fs, nil
+}
+
+func simpleImm(value string, label string) (uint64, error) {
+ res, err := strconv.ParseUint(value, 0, 64)
+ if err != nil {
+ return 0, fmt.Errorf("unable to parse %s %#v as integer", label, value)
+ }
+ if res > 255 {
+ return 0, fmt.Errorf("%s beyond 255: %d", label, res)
+ }
+ return res, err
+}
+
+func assembleTxn(ops *OpStream, spec *OpSpec, args []string) error {
+ if len(args) != 1 {
+ return ops.error("txn expects one argument")
+ }
+ fs, err := txnFieldImm(args[0], false, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
ops.pending.WriteByte(uint8(fs.field))
@@ -823,23 +837,13 @@ func assembleTxna(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 2 {
return ops.error("txna expects two immediate arguments")
}
- fs, ok := txnFieldSpecByName[args[0]]
- if !ok {
- return ops.errorf("txna unknown field: %#v", args[0])
- }
- _, ok = txnaFieldSpecByField[fs.field]
- if !ok {
- return ops.errorf("txna unknown field: %#v", args[0])
- }
- if fs.version > ops.Version {
- return ops.errorf("txna %#v available in version %d. Missed #pragma version?", args[0], fs.version)
- }
- arrayFieldIdx, err := strconv.ParseUint(args[1], 0, 64)
+ fs, err := txnFieldImm(args[0], true, ops)
if err != nil {
- return ops.error(err)
+ return ops.errorf("%s %w", spec.Name, err)
}
- if arrayFieldIdx > 255 {
- return ops.errorf("txna array index beyond 255: %d", arrayFieldIdx)
+ arrayFieldIdx, err := simpleImm(args[1], "array index")
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
@@ -853,16 +857,9 @@ func assembleTxnas(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.error("txnas expects one immediate argument")
}
- fs, ok := txnFieldSpecByName[args[0]]
- if !ok {
- return ops.errorf("txnas unknown field: %#v", args[0])
- }
- _, ok = txnaFieldSpecByField[fs.field]
- if !ok {
- return ops.errorf("txnas unknown field: %#v", args[0])
- }
- if fs.version > ops.Version {
- return ops.errorf("txnas %#v available in version %d. Missed #pragma version?", args[0], fs.version)
+ fs, err := txnFieldImm(args[0], true, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
@@ -875,24 +872,13 @@ func assembleGtxn(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 2 {
return ops.error("gtxn expects two arguments")
}
- slot, err := strconv.ParseUint(args[0], 0, 64)
+ slot, err := simpleImm(args[0], "transaction index")
if err != nil {
- return ops.error(err)
- }
- if slot > 255 {
- return ops.errorf("%s transaction index beyond 255: %d", spec.Name, slot)
+ return ops.errorf("%s %w", spec.Name, err)
}
-
- fs, ok := txnFieldSpecByName[args[1]]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[1])
- }
- _, ok = txnaFieldSpecByField[fs.field]
- if ok {
- return ops.errorf("found array field %#v in %s op", args[1], spec.Name)
- }
- if fs.version > ops.Version {
- return ops.errorf("field %#v available in version %d. Missed #pragma version?", args[1], fs.version)
+ fs, err := txnFieldImm(args[1], false, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
@@ -917,31 +903,17 @@ func assembleGtxna(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 3 {
return ops.errorf("%s expects three arguments", spec.Name)
}
- slot, err := strconv.ParseUint(args[0], 0, 64)
+ slot, err := simpleImm(args[0], "transaction index")
if err != nil {
- return ops.error(err)
- }
- if slot > 255 {
- return ops.errorf("%s group index beyond 255: %d", spec.Name, slot)
- }
-
- fs, ok := txnFieldSpecByName[args[1]]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[1])
+ return ops.errorf("%s %w", spec.Name, err)
}
- _, ok = txnaFieldSpecByField[fs.field]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[1])
- }
- if fs.version > ops.Version {
- return ops.errorf("%s %#v available in version %d. Missed #pragma version?", spec.Name, args[1], fs.version)
- }
- arrayFieldIdx, err := strconv.ParseUint(args[2], 0, 64)
+ fs, err := txnFieldImm(args[1], true, ops)
if err != nil {
- return ops.error(err)
+ return ops.errorf("%s %w", spec.Name, err)
}
- if arrayFieldIdx > 255 {
- return ops.errorf("%s array index beyond 255: %d", spec.Name, arrayFieldIdx)
+ arrayFieldIdx, err := simpleImm(args[2], "array index")
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
@@ -956,25 +928,13 @@ func assembleGtxnas(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 2 {
return ops.errorf("%s expects two immediate arguments", spec.Name)
}
-
- slot, err := strconv.ParseUint(args[0], 0, 64)
+ slot, err := simpleImm(args[0], "transaction index")
if err != nil {
- return ops.error(err)
+ return ops.errorf("%s %w", spec.Name, err)
}
- if slot > 255 {
- return ops.errorf("%s group index beyond 255: %d", spec.Name, slot)
- }
-
- fs, ok := txnFieldSpecByName[args[1]]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[1])
- }
- _, ok = txnaFieldSpecByField[fs.field]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[1])
- }
- if fs.version > ops.Version {
- return ops.errorf("%s %#v available in version %d. Missed #pragma version?", spec.Name, args[1], fs.version)
+ fs, err := txnFieldImm(args[1], true, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
@@ -992,16 +952,9 @@ func assembleGtxns(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.errorf("%s expects one or two immediate arguments", spec.Name)
}
- fs, ok := txnFieldSpecByName[args[0]]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
- }
- _, ok = txnaFieldSpecByField[fs.field]
- if ok {
- return ops.errorf("found array field %#v in gtxns op", args[0])
- }
- if fs.version > ops.Version {
- return ops.errorf("field %#v available in version %d. Missed #pragma version?", args[0], fs.version)
+ fs, err := txnFieldImm(args[0], false, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
@@ -1014,23 +967,13 @@ func assembleGtxnsa(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 2 {
return ops.errorf("%s expects two immediate arguments", spec.Name)
}
- fs, ok := txnFieldSpecByName[args[0]]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
- }
- _, ok = txnaFieldSpecByField[fs.field]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
- }
- if fs.version > ops.Version {
- return ops.errorf("%s %#v available in version %d. Missed #pragma version?", spec.Name, args[0], fs.version)
- }
- arrayFieldIdx, err := strconv.ParseUint(args[1], 0, 64)
+ fs, err := txnFieldImm(args[0], true, ops)
if err != nil {
- return ops.error(err)
+ return ops.errorf("%s %w", spec.Name, err)
}
- if arrayFieldIdx > 255 {
- return ops.errorf("%s array index beyond 255: %d", spec.Name, arrayFieldIdx)
+ arrayFieldIdx, err := simpleImm(args[1], "array index")
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
ops.pending.WriteByte(uint8(fs.field))
@@ -1043,16 +986,9 @@ func assembleGtxnsas(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.errorf("%s expects one immediate argument", spec.Name)
}
- fs, ok := txnFieldSpecByName[args[0]]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
- }
- _, ok = txnaFieldSpecByField[fs.field]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
- }
- if fs.version > ops.Version {
- return ops.errorf("%s %#v available in version %d. Missed #pragma version?", spec.Name, args[0], fs.version)
+ fs, err := txnFieldImm(args[0], true, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
ops.pending.WriteByte(uint8(fs.field))
@@ -1076,17 +1012,11 @@ func asmItxnOnly(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.errorf("%s expects one argument", spec.Name)
}
- fs, ok := txnFieldSpecByName[args[0]]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
- }
- _, ok = txnaFieldSpecByField[fs.field]
- if ok {
- return ops.errorf("found array field %#v in %s op", args[0], spec.Name)
- }
- if fs.version > ops.Version {
- return ops.errorf("field %#v available in version %d. Missed #pragma version?", args[0], fs.version)
+ fs, err := txnFieldImm(args[0], false, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
+
ops.pending.WriteByte(spec.Opcode)
ops.pending.WriteByte(uint8(fs.field))
ops.returns(fs.ftype)
@@ -1097,26 +1027,74 @@ func asmItxna(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 2 {
return ops.errorf("%s expects two immediate arguments", spec.Name)
}
- fs, ok := txnFieldSpecByName[args[0]]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
+ fs, err := txnFieldImm(args[0], true, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
- _, ok = txnaFieldSpecByField[fs.field]
- if !ok {
- return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
+ arrayFieldIdx, err := simpleImm(args[1], "array index")
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
- if fs.version > ops.Version {
- return ops.errorf("%s %#v available in version %d. Missed #pragma version?", spec.Name, args[0], fs.version)
+
+ ops.pending.WriteByte(spec.Opcode)
+ ops.pending.WriteByte(uint8(fs.field))
+ ops.pending.WriteByte(uint8(arrayFieldIdx))
+ ops.returns(fs.ftype)
+ return nil
+}
+
+// asmGitxn delegates to asmGitxnOnly or asmGitxna depending on number of operands
+func asmGitxn(ops *OpStream, spec *OpSpec, args []string) error {
+ if len(args) == 2 {
+ return asmGitxnOnly(ops, spec, args)
+ }
+ if len(args) == 3 {
+ itxna := OpsByName[ops.Version]["gitxna"]
+ return asmGitxna(ops, &itxna, args)
+ }
+ return ops.errorf("%s expects two or three arguments", spec.Name)
+}
+
+func asmGitxnOnly(ops *OpStream, spec *OpSpec, args []string) error {
+ if len(args) != 2 {
+ return ops.errorf("%s expects two arguments", spec.Name)
}
- arrayFieldIdx, err := strconv.ParseUint(args[1], 0, 64)
+ slot, err := simpleImm(args[0], "transaction index")
if err != nil {
- return ops.error(err)
+ return ops.errorf("%s %w", spec.Name, err)
}
- if arrayFieldIdx > 255 {
- return ops.errorf("%s array index beyond 255: %d", spec.Name, arrayFieldIdx)
+ fs, err := txnFieldImm(args[1], false, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(spec.Opcode)
+ ops.pending.WriteByte(uint8(slot))
+ ops.pending.WriteByte(uint8(fs.field))
+ ops.returns(fs.ftype)
+ return nil
+}
+
+func asmGitxna(ops *OpStream, spec *OpSpec, args []string) error {
+ if len(args) != 3 {
+ return ops.errorf("%s expects three immediate arguments", spec.Name)
+ }
+ slot, err := simpleImm(args[0], "transaction index")
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
+ }
+
+ fs, err := txnFieldImm(args[1], true, ops)
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
+ }
+ arrayFieldIdx, err := simpleImm(args[2], "array index")
+ if err != nil {
+ return ops.errorf("%s %w", spec.Name, err)
+ }
+
+ ops.pending.WriteByte(spec.Opcode)
+ ops.pending.WriteByte(uint8(slot))
ops.pending.WriteByte(uint8(fs.field))
ops.pending.WriteByte(uint8(arrayFieldIdx))
ops.returns(fs.ftype)
@@ -1127,7 +1105,7 @@ func assembleGlobal(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.errorf("%s expects one argument", spec.Name)
}
- fs, ok := globalFieldSpecByName[args[0]]
+ fs, ok := GlobalFieldSpecByName[args[0]]
if !ok {
return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
}
@@ -1139,7 +1117,7 @@ func assembleGlobal(ops *OpStream, spec *OpSpec, args []string) error {
val := fs.field
ops.pending.WriteByte(spec.Opcode)
ops.pending.WriteByte(uint8(val))
- ops.trace("%s (%s)", fs.field.String(), fs.ftype.String())
+ ops.trace("%s (%s)", fs.field, fs.ftype)
ops.returns(fs.ftype)
return nil
}
@@ -1148,7 +1126,7 @@ func assembleAssetHolding(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.errorf("%s expects one argument", spec.Name)
}
- fs, ok := assetHoldingFieldSpecByName[args[0]]
+ fs, ok := AssetHoldingFieldSpecByName[args[0]]
if !ok {
return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
}
@@ -1160,7 +1138,7 @@ func assembleAssetHolding(ops *OpStream, spec *OpSpec, args []string) error {
val := fs.field
ops.pending.WriteByte(spec.Opcode)
ops.pending.WriteByte(uint8(val))
- ops.trace("%s (%s)", fs.field.String(), fs.ftype.String())
+ ops.trace("%s (%s)", fs.field, fs.ftype)
ops.returns(fs.ftype, StackUint64)
return nil
}
@@ -1169,7 +1147,7 @@ func assembleAssetParams(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.errorf("%s expects one argument", spec.Name)
}
- fs, ok := assetParamsFieldSpecByName[args[0]]
+ fs, ok := AssetParamsFieldSpecByName[args[0]]
if !ok {
return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
}
@@ -1181,7 +1159,7 @@ func assembleAssetParams(ops *OpStream, spec *OpSpec, args []string) error {
val := fs.field
ops.pending.WriteByte(spec.Opcode)
ops.pending.WriteByte(uint8(val))
- ops.trace("%s (%s)", fs.field.String(), fs.ftype.String())
+ ops.trace("%s (%s)", fs.field, fs.ftype)
ops.returns(fs.ftype, StackUint64)
return nil
}
@@ -1190,7 +1168,28 @@ func assembleAppParams(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.errorf("%s expects one argument", spec.Name)
}
- fs, ok := appParamsFieldSpecByName[args[0]]
+ fs, ok := AppParamsFieldSpecByName[args[0]]
+ if !ok {
+ return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
+ }
+ if fs.version > ops.Version {
+ //nolint:errcheck // we continue to maintain typestack
+ ops.errorf("%s %s available in version %d. Missed #pragma version?", spec.Name, args[0], fs.version)
+ }
+
+ val := fs.field
+ ops.pending.WriteByte(spec.Opcode)
+ ops.pending.WriteByte(uint8(val))
+ ops.trace("%s (%s)", fs.field, fs.ftype)
+ ops.returns(fs.ftype, StackUint64)
+ return nil
+}
+
+func assembleAcctParams(ops *OpStream, spec *OpSpec, args []string) error {
+ if len(args) != 1 {
+ return ops.errorf("%s expects one argument", spec.Name)
+ }
+ fs, ok := AcctParamsFieldSpecByName[args[0]]
if !ok {
return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
}
@@ -1202,7 +1201,7 @@ func assembleAppParams(ops *OpStream, spec *OpSpec, args []string) error {
val := fs.field
ops.pending.WriteByte(spec.Opcode)
ops.pending.WriteByte(uint8(val))
- ops.trace("%s (%s)", fs.field.String(), fs.ftype.String())
+ ops.trace("%s (%s)", fs.field, fs.ftype)
ops.returns(fs.ftype, StackUint64)
return nil
}
@@ -1211,13 +1210,15 @@ func asmTxField(ops *OpStream, spec *OpSpec, args []string) error {
if len(args) != 1 {
return ops.errorf("%s expects one argument", spec.Name)
}
- fs, ok := txnFieldSpecByName[args[0]]
+ fs, ok := TxnFieldSpecByName[args[0]]
if !ok {
- return ops.errorf("txn unknown field: %#v", args[0])
+ return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
+ }
+ if fs.itxVersion == 0 {
+ return ops.errorf("%s %#v is not allowed.", spec.Name, args[0])
}
- _, ok = txnaFieldSpecByField[fs.field]
- if ok {
- return ops.errorf("found array field %#v in %s op", args[0], spec.Name)
+ if fs.itxVersion > ops.Version {
+ return ops.errorf("%s %#v available in version %d. Missed #pragma version?", spec.Name, args[0], fs.itxVersion)
}
ops.pending.WriteByte(spec.Opcode)
ops.pending.WriteByte(uint8(fs.field))
@@ -1229,7 +1230,7 @@ func assembleEcdsa(ops *OpStream, spec *OpSpec, args []string) error {
return ops.errorf("%s expects one argument", spec.Name)
}
- cs, ok := ecdsaCurveSpecByName[args[0]]
+ cs, ok := EcdsaCurveSpecByName[args[0]]
if !ok {
return ops.errorf("%s unknown field: %#v", spec.Name, args[0])
}
@@ -1275,12 +1276,9 @@ func asmDefault(ops *OpStream, spec *OpSpec, args []string) error {
}
ops.pending.WriteByte(spec.Opcode)
for i := 0; i < spec.Details.Size-1; i++ {
- val, err := strconv.ParseUint(args[i], 0, 64)
+ val, err := simpleImm(args[i], "argument")
if err != nil {
- return ops.error(err)
- }
- if val > 255 {
- return ops.errorf("%s outside 0..255: %d", spec.Name, val)
+ return ops.errorf("%s %w", spec.Name, err)
}
ops.pending.WriteByte(byte(val))
}
@@ -1434,7 +1432,7 @@ func typeTxField(ops *OpStream, args []string) (StackTypes, StackTypes) {
if len(args) != 1 {
return oneAny, nil
}
- fs, ok := txnFieldSpecByName[args[0]]
+ fs, ok := TxnFieldSpecByName[args[0]]
if !ok {
return oneAny, nil
}
@@ -1735,7 +1733,7 @@ func (ops *OpStream) pragma(line string) error {
if err != nil {
return ops.errorf("bad #pragma version: %#v", value)
}
- if ver < 1 || ver > AssemblerMaxVersion {
+ if ver > AssemblerMaxVersion {
return ops.errorf("unsupported version: %d", ver)
}
@@ -2040,7 +2038,7 @@ func (ops *OpStream) optimizeConstants(refs []constReference, constBlock []inter
func (ops *OpStream) prependCBlocks() []byte {
var scratch [binary.MaxVarintLen64]byte
prebytes := bytes.Buffer{}
- vlen := binary.PutUvarint(scratch[:], ops.GetVersion())
+ vlen := binary.PutUvarint(scratch[:], ops.Version)
prebytes.Write(scratch[:vlen])
if len(ops.intc) > 0 && !ops.hasIntcBlock {
prebytes.WriteByte(0x20) // intcblock
@@ -2239,7 +2237,7 @@ func parseIntcblock(program []byte, pc int) (intc []uint64, nextpc int, err erro
pos := pc + 1
numInts, bytesUsed := binary.Uvarint(program[pos:])
if bytesUsed <= 0 {
- err = fmt.Errorf("could not decode int const block size at pc=%d", pos)
+ err = fmt.Errorf("could not decode intcblock size at pc=%d", pos)
return
}
pos += bytesUsed
@@ -2268,7 +2266,7 @@ func checkIntConstBlock(cx *EvalContext) error {
pos := cx.pc + 1
numInts, bytesUsed := binary.Uvarint(cx.program[pos:])
if bytesUsed <= 0 {
- return fmt.Errorf("could not decode int const block size at pc=%d", pos)
+ return fmt.Errorf("could not decode intcblock size at pc=%d", pos)
}
pos += bytesUsed
if numInts > uint64(len(cx.program)) {
@@ -2296,7 +2294,7 @@ func parseBytecBlock(program []byte, pc int) (bytec [][]byte, nextpc int, err er
pos := pc + 1
numItems, bytesUsed := binary.Uvarint(program[pos:])
if bytesUsed <= 0 {
- err = fmt.Errorf("could not decode []byte const block size at pc=%d", pos)
+ err = fmt.Errorf("could not decode bytecblock size at pc=%d", pos)
return
}
pos += bytesUsed
@@ -2336,7 +2334,7 @@ func checkByteConstBlock(cx *EvalContext) error {
pos := cx.pc + 1
numItems, bytesUsed := binary.Uvarint(cx.program[pos:])
if bytesUsed <= 0 {
- return fmt.Errorf("could not decode []byte const block size at pc=%d", pos)
+ return fmt.Errorf("could not decode bytecblock size at pc=%d", pos)
}
pos += bytesUsed
if numItems > uint64(len(cx.program)) {
@@ -2546,7 +2544,7 @@ func disTxna(dis *disassembleState, spec *OpSpec) (string, error) {
return fmt.Sprintf("%s %s %d", spec.Name, TxnFieldNames[txarg], arrayFieldIdx), nil
}
-// This is also used to disassemble gtxnas
+// disGtxn is also used to disassemble gtxnas, gitxn
func disGtxn(dis *disassembleState, spec *OpSpec) (string, error) {
lastIdx := dis.pc + 2
if len(dis.program) <= lastIdx {
@@ -2562,6 +2560,7 @@ func disGtxn(dis *disassembleState, spec *OpSpec) (string, error) {
return fmt.Sprintf("%s %d %s", spec.Name, gi, TxnFieldNames[txarg]), nil
}
+// disGtxna is also used to disassemble gitxna
func disGtxna(dis *disassembleState, spec *OpSpec) (string, error) {
lastIdx := dis.pc + 3
if len(dis.program) <= lastIdx {
@@ -2575,7 +2574,7 @@ func disGtxna(dis *disassembleState, spec *OpSpec) (string, error) {
return "", fmt.Errorf("invalid txn arg index %d at pc=%d", txarg, dis.pc)
}
arrayFieldIdx := dis.program[dis.pc+3]
- return fmt.Sprintf("gtxna %d %s %d", gi, TxnFieldNames[txarg], arrayFieldIdx), nil
+ return fmt.Sprintf("%s %d %s %d", spec.Name, gi, TxnFieldNames[txarg], arrayFieldIdx), nil
}
func disGlobal(dis *disassembleState, spec *OpSpec) (string, error) {
@@ -2662,6 +2661,20 @@ func disAppParams(dis *disassembleState, spec *OpSpec) (string, error) {
return fmt.Sprintf("%s %s", spec.Name, AppParamsFieldNames[arg]), nil
}
+func disAcctParams(dis *disassembleState, spec *OpSpec) (string, error) {
+ lastIdx := dis.pc + 1
+ if len(dis.program) <= lastIdx {
+ missing := lastIdx - len(dis.program) + 1
+ return "", fmt.Errorf("unexpected %s opcode end: missing %d bytes", spec.Name, missing)
+ }
+ dis.nextpc = dis.pc + 2
+ arg := dis.program[dis.pc+1]
+ if int(arg) >= len(AcctParamsFieldNames) {
+ return "", fmt.Errorf("invalid acct params arg index %d at pc=%d", arg, dis.pc)
+ }
+ return fmt.Sprintf("%s %s", spec.Name, AcctParamsFieldNames[arg]), nil
+}
+
func disTxField(dis *disassembleState, spec *OpSpec) (string, error) {
lastIdx := dis.pc + 1
if len(dis.program) <= lastIdx {
diff --git a/data/transactions/logic/assembler_test.go b/data/transactions/logic/assembler_test.go
index 0035d3f10..fd53d458b 100644
--- a/data/transactions/logic/assembler_test.go
+++ b/data/transactions/logic/assembler_test.go
@@ -344,7 +344,16 @@ itxna Logs 3
const v6Nonsense = v5Nonsense + `
itxn_next
+gitxn 4 CreatedAssetID
+gitxna 3 Logs 12
base64_decode URLEncoding
+int 0
+dup
+gloadss
+byte 0x0123456789abcd
+bsqrt
+txn Sender
+acct_params_get AcctBalance
`
var nonsense = map[uint64]string{
@@ -362,7 +371,7 @@ var compiled = map[uint64]string{
3: "032008b7a60cf8acd19181cf959a12f8acd19181cf951af8acd19181cf15f8acd191810f01020026050212340c68656c6c6f20776f726c6421208dae2087fbba51304eb02b91f656948397a7946390e8cb70fc9ea4d95f92251d024242047465737400320032013202320328292929292a0431003101310231043105310731083109310a310b310c310d310e310f3111311231133114311533000033000133000233000433000533000733000833000933000a33000b33000c33000d33000e33000f3300113300123300133300143300152d2e0102222324252104082209240a220b230c240d250e230f23102311231223132314181b1c2b171615400003290349483403350222231d4a484848482a50512a63222352410003420000432105602105612105270463484821052b62482b642b65484821052b2106662b21056721072b682b692107210570004848210771004848361c0037001a0031183119311b311d311e311f3120210721051e312131223123312431253126312731283129312a312b312c312d312e312f4478222105531421055427042106552105082106564c4d4b02210538212106391c0081e80780046a6f686e",
4: "042004010200b7a60c26040242420c68656c6c6f20776f726c6421208dae2087fbba51304eb02b91f656948397a7946390e8cb70fc9ea4d95f92251d047465737400320032013202320380021234292929292a0431003101310231043105310731083109310a310b310c310d310e310f3111311231133114311533000033000133000233000433000533000733000833000933000a33000b33000c33000d33000e33000f3300113300123300133300143300152d2e01022581f8acd19181cf959a1281f8acd19181cf951a81f8acd19181cf1581f8acd191810f082209240a220b230c240d250e230f23102311231223132314181b1c28171615400003290349483403350222231d4a484848482a50512a632223524100034200004322602261222b634848222862482864286548482228236628226724286828692422700048482471004848361c0037001a0031183119311b311d311e311f312024221e312131223123312431253126312731283129312a312b312c312d312e312f44782522531422542b2355220823564c4d4b0222382123391c0081e80780046a6f686e2281d00f24231f880003420001892223902291922394239593a0a1a2a3a4a5a6a7a8a9aaabacadae23af3a00003b003c003d8164",
5: "052004010002b7a60c26050242420c68656c6c6f20776f726c6421070123456789abcd208dae2087fbba51304eb02b91f656948397a7946390e8cb70fc9ea4d95f92251d047465737400320032013202320380021234292929292b0431003101310231043105310731083109310a310b310c310d310e310f3111311231133114311533000033000133000233000433000533000733000833000933000a33000b33000c33000d33000e33000f3300113300123300133300143300152d2e01022581f8acd19181cf959a1281f8acd19181cf951a81f8acd19181cf1581f8acd191810f082209240a220b230c240d250e230f23102311231223132314181b1c28171615400003290349483403350222231d4a484848482b50512a632223524100034200004322602261222704634848222862482864286548482228246628226723286828692322700048482371004848361c0037001a0031183119311b311d311e311f312023221e312131223123312431253126312731283129312a312b312c312d312e312f447825225314225427042455220824564c4d4b0222382124391c0081e80780046a6f686e2281d00f23241f880003420001892224902291922494249593a0a1a2a3a4a5a6a7a8a9aaabacadae24af3a00003b003c003d816472064e014f012a57000823810858235b235a2359b03139330039b1b200b322c01a23c1001a2323c21a23c3233e233f8120af06002a494905002a49490700b53a03",
- 6: "062004010002b7a60c26050242420c68656c6c6f20776f726c6421070123456789abcd208dae2087fbba51304eb02b91f656948397a7946390e8cb70fc9ea4d95f92251d047465737400320032013202320380021234292929292b0431003101310231043105310731083109310a310b310c310d310e310f3111311231133114311533000033000133000233000433000533000733000833000933000a33000b33000c33000d33000e33000f3300113300123300133300143300152d2e01022581f8acd19181cf959a1281f8acd19181cf951a81f8acd19181cf1581f8acd191810f082209240a220b230c240d250e230f23102311231223132314181b1c28171615400003290349483403350222231d4a484848482b50512a632223524100034200004322602261222704634848222862482864286548482228246628226723286828692322700048482371004848361c0037001a0031183119311b311d311e311f312023221e312131223123312431253126312731283129312a312b312c312d312e312f447825225314225427042455220824564c4d4b0222382124391c0081e80780046a6f686e2281d00f23241f880003420001892224902291922494249593a0a1a2a3a4a5a6a7a8a9aaabacadae24af3a00003b003c003d816472064e014f012a57000823810858235b235a2359b03139330039b1b200b322c01a23c1001a2323c21a23c3233e233f8120af06002a494905002a49490700b53a03b65c00",
+ 6: "062004010002b7a60c26050242420c68656c6c6f20776f726c6421070123456789abcd208dae2087fbba51304eb02b91f656948397a7946390e8cb70fc9ea4d95f92251d047465737400320032013202320380021234292929292b0431003101310231043105310731083109310a310b310c310d310e310f3111311231133114311533000033000133000233000433000533000733000833000933000a33000b33000c33000d33000e33000f3300113300123300133300143300152d2e01022581f8acd19181cf959a1281f8acd19181cf951a81f8acd19181cf1581f8acd191810f082209240a220b230c240d250e230f23102311231223132314181b1c28171615400003290349483403350222231d4a484848482b50512a632223524100034200004322602261222704634848222862482864286548482228246628226723286828692322700048482371004848361c0037001a0031183119311b311d311e311f312023221e312131223123312431253126312731283129312a312b312c312d312e312f447825225314225427042455220824564c4d4b0222382124391c0081e80780046a6f686e2281d00f23241f880003420001892224902291922494249593a0a1a2a3a4a5a6a7a8a9aaabacadae24af3a00003b003c003d816472064e014f012a57000823810858235b235a2359b03139330039b1b200b322c01a23c1001a2323c21a23c3233e233f8120af06002a494905002a49490700b53a03b6b7043cb8033a0c5c002349c42a9631007300",
}
func pseudoOp(opcode string) bool {
@@ -431,7 +440,7 @@ pop
require.Equal(t, ops1.Program, ops2.Program)
}
-type expect struct {
+type Expect struct {
l int
s string
}
@@ -449,7 +458,7 @@ func testMatch(t testing.TB, actual, expected string) {
}
}
-func testProg(t testing.TB, source string, ver uint64, expected ...expect) *OpStream {
+func testProg(t testing.TB, source string, ver uint64, expected ...Expect) *OpStream {
t.Helper()
program := strings.ReplaceAll(source, ";", "\n")
ops, err := AssembleStringWithVersion(program, ver)
@@ -463,7 +472,7 @@ func testProg(t testing.TB, source string, ver uint64, expected ...expect) *OpSt
require.NotNil(t, ops.Program)
// It should always be possible to Disassemble
dis, err := Disassemble(ops.Program)
- require.NoError(t, err)
+ require.NoError(t, err, program)
// And, while the disassembly may not match input
// exactly, the assembly of the disassembly should
// give the same bytecode
@@ -520,7 +529,7 @@ func testLine(t *testing.T, line string, ver uint64, expected string) {
testProg(t, source, ver)
return
}
- testProg(t, source, ver, expect{2, expected})
+ testProg(t, source, ver, Expect{2, expected})
}
func TestAssembleTxna(t *testing.T) {
@@ -528,30 +537,30 @@ func TestAssembleTxna(t *testing.T) {
testLine(t, "txna Accounts 256", AssemblerMaxVersion, "txna array index beyond 255: 256")
testLine(t, "txna ApplicationArgs 256", AssemblerMaxVersion, "txna array index beyond 255: 256")
- testLine(t, "txna Sender 256", AssemblerMaxVersion, "txna unknown field: \"Sender\"")
+ testLine(t, "txna Sender 256", AssemblerMaxVersion, "txna found scalar field \"Sender\"...")
testLine(t, "gtxna 0 Accounts 256", AssemblerMaxVersion, "gtxna array index beyond 255: 256")
testLine(t, "gtxna 0 ApplicationArgs 256", AssemblerMaxVersion, "gtxna array index beyond 255: 256")
- testLine(t, "gtxna 256 Accounts 0", AssemblerMaxVersion, "gtxna group index beyond 255: 256")
- testLine(t, "gtxna 0 Sender 256", AssemblerMaxVersion, "gtxna unknown field: \"Sender\"")
+ testLine(t, "gtxna 256 Accounts 0", AssemblerMaxVersion, "gtxna transaction index beyond 255: 256")
+ testLine(t, "gtxna 0 Sender 256", AssemblerMaxVersion, "gtxna found scalar field \"Sender\"...")
testLine(t, "txn Accounts 0", 1, "txn expects one argument")
testLine(t, "txn Accounts 0 1", 2, "txn expects one or two arguments")
testLine(t, "txna Accounts 0 1", AssemblerMaxVersion, "txna expects two immediate arguments")
testLine(t, "txnas Accounts 1", AssemblerMaxVersion, "txnas expects one immediate argument")
- testLine(t, "txna Accounts a", AssemblerMaxVersion, "strconv.ParseUint...")
+ testLine(t, "txna Accounts a", AssemblerMaxVersion, "txna unable to parse...")
testLine(t, "gtxn 0 Sender 0", 1, "gtxn expects two arguments")
testLine(t, "gtxn 0 Sender 1 2", 2, "gtxn expects two or three arguments")
testLine(t, "gtxna 0 Accounts 1 2", AssemblerMaxVersion, "gtxna expects three arguments")
- testLine(t, "gtxna a Accounts 0", AssemblerMaxVersion, "strconv.ParseUint...")
- testLine(t, "gtxna 0 Accounts a", AssemblerMaxVersion, "strconv.ParseUint...")
+ testLine(t, "gtxna a Accounts 0", AssemblerMaxVersion, "gtxna unable to parse...")
+ testLine(t, "gtxna 0 Accounts a", AssemblerMaxVersion, "gtxna unable to parse...")
testLine(t, "gtxnas Accounts 1 2", AssemblerMaxVersion, "gtxnas expects two immediate arguments")
testLine(t, "txn ABC", 2, "txn unknown field: \"ABC\"")
testLine(t, "gtxn 0 ABC", 2, "gtxn unknown field: \"ABC\"")
- testLine(t, "gtxn a ABC", 2, "strconv.ParseUint...")
- testLine(t, "txn Accounts", AssemblerMaxVersion, "found array field \"Accounts\" in txn op")
- testLine(t, "txn Accounts", 1, "found array field \"Accounts\" in txn op")
+ testLine(t, "gtxn a ABC", 2, "gtxn unable to parse...")
+ testLine(t, "txn Accounts", AssemblerMaxVersion, "txn found array field \"Accounts\"...")
+ testLine(t, "txn Accounts", 1, "txn found array field \"Accounts\"...")
testLine(t, "txn Accounts 0", AssemblerMaxVersion, "")
- testLine(t, "gtxn 0 Accounts", AssemblerMaxVersion, "found array field \"Accounts\" in gtxn op")
- testLine(t, "gtxn 0 Accounts", 1, "found array field \"Accounts\" in gtxn op")
+ testLine(t, "gtxn 0 Accounts", AssemblerMaxVersion, "gtxn found array field \"Accounts\"...")
+ testLine(t, "gtxn 0 Accounts", 1, "gtxn found array field \"Accounts\"...")
testLine(t, "gtxn 0 Accounts 1", AssemblerMaxVersion, "")
}
@@ -570,7 +579,7 @@ int 1
+
// comment
`
- testProg(t, source, AssemblerMaxVersion, expect{3, "+ arg 0 wanted type uint64 got []byte"})
+ testProg(t, source, AssemblerMaxVersion, Expect{3, "+ arg 0 wanted type uint64 got []byte"})
}
// mutateProgVersion replaces version (first two symbols) in hex-encoded program
@@ -709,10 +718,10 @@ func TestAssembleBytes(t *testing.T) {
}
for _, b := range bad {
- testProg(t, b[0], v, expect{1, b[1]})
+ testProg(t, b[0], v, Expect{1, b[1]})
// pushbytes should produce the same errors
if v >= 3 {
- testProg(t, strings.Replace(b[0], "byte", "pushbytes", 1), v, expect{1, b[1]})
+ testProg(t, strings.Replace(b[0], "byte", "pushbytes", 1), v, Expect{1, b[1]})
}
}
})
@@ -1181,7 +1190,7 @@ bnz wat
int 2`
for v := uint64(1); v < backBranchEnabledVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
- testProg(t, source, v, expect{3, "label \"wat\" is a back reference..."})
+ testProg(t, source, v, Expect{3, "label \"wat\" is a back reference..."})
})
}
for v := uint64(backBranchEnabledVersion); v <= AssemblerMaxVersion; v++ {
@@ -1235,7 +1244,7 @@ bnz nowhere
int 2`
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
- testProg(t, source, v, expect{2, "reference to undefined label \"nowhere\""})
+ testProg(t, source, v, Expect{2, "reference to undefined label \"nowhere\""})
})
}
}
@@ -1268,8 +1277,8 @@ int 2`
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
testProg(t, source, v,
- expect{2, "reference to undefined label \"nowhere\""},
- expect{4, "txn unknown field: \"XYZ\""})
+ Expect{2, "reference to undefined label \"nowhere\""},
+ Expect{4, "txn unknown field: \"XYZ\""})
})
}
}
@@ -1314,6 +1323,9 @@ global LatestTimestamp
global CurrentApplicationID
global CreatorAddress
global GroupID
+global OpcodeBudget
+global CallerApplicationID
+global CallerApplicationAddress
txn Sender
txn Fee
bnz label1
@@ -1463,15 +1475,15 @@ func TestConstantArgs(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
- testProg(t, "int", v, expect{1, "int needs one argument"})
- testProg(t, "intc", v, expect{1, "intc operation needs one argument"})
- testProg(t, "byte", v, expect{1, "byte operation needs byte literal argument"})
- testProg(t, "bytec", v, expect{1, "bytec operation needs one argument"})
- testProg(t, "addr", v, expect{1, "addr operation needs one argument"})
+ testProg(t, "int", v, Expect{1, "int needs one argument"})
+ testProg(t, "intc", v, Expect{1, "intc operation needs one argument"})
+ testProg(t, "byte", v, Expect{1, "byte operation needs byte literal argument"})
+ testProg(t, "bytec", v, Expect{1, "bytec operation needs one argument"})
+ testProg(t, "addr", v, Expect{1, "addr operation needs one argument"})
}
for v := uint64(3); v <= AssemblerMaxVersion; v++ {
- testProg(t, "pushint", v, expect{1, "pushint needs one argument"})
- testProg(t, "pushbytes", v, expect{1, "pushbytes operation needs byte literal argument"})
+ testProg(t, "pushint", v, Expect{1, "pushint needs one argument"})
+ testProg(t, "pushbytes", v, Expect{1, "pushbytes operation needs byte literal argument"})
}
}
@@ -1608,7 +1620,7 @@ balance
int 1
==`
for v := uint64(2); v < directRefEnabledVersion; v++ {
- testProg(t, source, v, expect{2, "balance arg 0 wanted type uint64 got []byte"})
+ testProg(t, source, v, Expect{2, "balance arg 0 wanted type uint64 got []byte"})
}
for v := uint64(directRefEnabledVersion); v <= AssemblerMaxVersion; v++ {
testProg(t, source, v)
@@ -1625,7 +1637,7 @@ min_balance
int 1
==`
for v := uint64(3); v < directRefEnabledVersion; v++ {
- testProg(t, source, v, expect{2, "min_balance arg 0 wanted type uint64 got []byte"})
+ testProg(t, source, v, Expect{2, "min_balance arg 0 wanted type uint64 got []byte"})
}
for v := uint64(directRefEnabledVersion); v <= AssemblerMaxVersion; v++ {
testProg(t, source, v)
@@ -1639,16 +1651,16 @@ func TestAssembleAsset(t *testing.T) {
introduction := OpsByName[LogicVersion]["asset_holding_get"].Version
for v := introduction; v <= AssemblerMaxVersion; v++ {
testProg(t, "asset_holding_get ABC 1", v,
- expect{1, "asset_holding_get ABC 1 expects 2 stack arguments..."})
+ Expect{1, "asset_holding_get ABC 1 expects 2 stack arguments..."})
testProg(t, "int 1; asset_holding_get ABC 1", v,
- expect{2, "asset_holding_get ABC 1 expects 2 stack arguments..."})
+ Expect{2, "asset_holding_get ABC 1 expects 2 stack arguments..."})
testProg(t, "int 1; int 1; asset_holding_get ABC 1", v,
- expect{3, "asset_holding_get expects one argument"})
+ Expect{3, "asset_holding_get expects one argument"})
testProg(t, "int 1; int 1; asset_holding_get ABC", v,
- expect{3, "asset_holding_get unknown field: \"ABC\""})
+ Expect{3, "asset_holding_get unknown field: \"ABC\""})
testProg(t, "byte 0x1234; asset_params_get ABC 1", v,
- expect{2, "asset_params_get ABC 1 arg 0 wanted type uint64..."})
+ Expect{2, "asset_params_get ABC 1 arg 0 wanted type uint64..."})
testLine(t, "asset_params_get ABC 1", v, "asset_params_get expects one argument")
testLine(t, "asset_params_get ABC", v, "asset_params_get unknown field: \"ABC\"")
@@ -2032,15 +2044,15 @@ func TestPragmas(t *testing.T) {
}
testProg(t, `#pragma version 100`, assemblerNoVersion,
- expect{1, "unsupported version: 100"})
+ Expect{1, "unsupported version: 100"})
- testProg(t, `int 1`, 99, expect{0, "Can not assemble version 99"})
+ testProg(t, `int 1`, 99, Expect{0, "Can not assemble version 99"})
- testProg(t, `#pragma version 0`, assemblerNoVersion,
- expect{1, "unsupported version: 0"})
+ // Allow this on the off chance someone needs to reassemble an old logicsig
+ testProg(t, `#pragma version 0`, assemblerNoVersion)
testProg(t, `#pragma version a`, assemblerNoVersion,
- expect{1, `bad #pragma version: "a"`})
+ Expect{1, `bad #pragma version: "a"`})
// will default to 1
ops := testProg(t, "int 3", assemblerNoVersion)
@@ -2054,24 +2066,24 @@ func TestPragmas(t *testing.T) {
require.Equal(t, uint64(2), ops.Version)
// changing version is not allowed
- testProg(t, "#pragma version 1", 2, expect{1, "version mismatch..."})
- testProg(t, "#pragma version 2", 1, expect{1, "version mismatch..."})
+ testProg(t, "#pragma version 1", 2, Expect{1, "version mismatch..."})
+ testProg(t, "#pragma version 2", 1, Expect{1, "version mismatch..."})
testProg(t, "#pragma version 2\n#pragma version 1", assemblerNoVersion,
- expect{2, "version mismatch..."})
+ Expect{2, "version mismatch..."})
// repetitive, but fine
ops = testProg(t, "#pragma version 2\n#pragma version 2", assemblerNoVersion)
require.Equal(t, uint64(2), ops.Version)
testProg(t, "\nint 1\n#pragma version 2", assemblerNoVersion,
- expect{3, "#pragma version is only allowed before instructions"})
+ Expect{3, "#pragma version is only allowed before instructions"})
testProg(t, "#pragma run-mode 2", assemblerNoVersion,
- expect{1, `unsupported pragma directive: "run-mode"`})
+ Expect{1, `unsupported pragma directive: "run-mode"`})
testProg(t, "#pragma versions", assemblerNoVersion,
- expect{1, `unsupported pragma directive: "versions"`})
+ Expect{1, `unsupported pragma directive: "versions"`})
ops = testProg(t, "#pragma version 1", assemblerNoVersion)
require.Equal(t, uint64(1), ops.Version)
@@ -2079,10 +2091,10 @@ func TestPragmas(t *testing.T) {
ops = testProg(t, "\n#pragma version 1", assemblerNoVersion)
require.Equal(t, uint64(1), ops.Version)
- testProg(t, "#pragma", assemblerNoVersion, expect{1, "empty pragma"})
+ testProg(t, "#pragma", assemblerNoVersion, Expect{1, "empty pragma"})
testProg(t, "#pragma version", assemblerNoVersion,
- expect{1, "no version value"})
+ Expect{1, "no version value"})
ops = testProg(t, " #pragma version 5 ", assemblerNoVersion)
require.Equal(t, uint64(5), ops.Version)
@@ -2101,8 +2113,8 @@ int 1
require.NoError(t, err)
require.Equal(t, ops1.Program, ops.Program)
- testProg(t, text, 0, expect{1, "version mismatch..."})
- testProg(t, text, 2, expect{1, "version mismatch..."})
+ testProg(t, text, 0, Expect{1, "version mismatch..."})
+ testProg(t, text, 2, Expect{1, "version mismatch..."})
testProg(t, text, assemblerNoVersion)
ops, err = AssembleStringWithVersion(text, assemblerNoVersion)
@@ -2118,8 +2130,8 @@ int 1
require.NoError(t, err)
require.Equal(t, ops2.Program, ops.Program)
- testProg(t, text, 0, expect{1, "version mismatch..."})
- testProg(t, text, 1, expect{1, "version mismatch..."})
+ testProg(t, text, 0, Expect{1, "version mismatch..."})
+ testProg(t, text, 1, Expect{1, "version mismatch..."})
ops, err = AssembleStringWithVersion(text, assemblerNoVersion)
require.NoError(t, err)
@@ -2139,7 +2151,7 @@ len
require.Equal(t, ops2.Program, ops.Program)
testProg(t, "#pragma unk", assemblerNoVersion,
- expect{1, `unsupported pragma directive: "unk"`})
+ Expect{1, `unsupported pragma directive: "unk"`})
}
func TestAssembleConstants(t *testing.T) {
@@ -2211,29 +2223,29 @@ func TestSwapTypeCheck(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
/* reconfirm that we detect this type error */
- testProg(t, "int 1; byte 0x1234; +", AssemblerMaxVersion, expect{3, "+ arg 1..."})
+ testProg(t, "int 1; byte 0x1234; +", AssemblerMaxVersion, Expect{3, "+ arg 1..."})
/* despite swap, we track types */
- testProg(t, "int 1; byte 0x1234; swap; +", AssemblerMaxVersion, expect{4, "+ arg 0..."})
- testProg(t, "byte 0x1234; int 1; swap; +", AssemblerMaxVersion, expect{4, "+ arg 1..."})
+ testProg(t, "int 1; byte 0x1234; swap; +", AssemblerMaxVersion, Expect{4, "+ arg 0..."})
+ testProg(t, "byte 0x1234; int 1; swap; +", AssemblerMaxVersion, Expect{4, "+ arg 1..."})
}
func TestDigAsm(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- testProg(t, "int 1; dig; +", AssemblerMaxVersion, expect{2, "dig expects 1 immediate..."})
- testProg(t, "int 1; dig junk; +", AssemblerMaxVersion, expect{2, "...invalid syntax..."})
+ testProg(t, "int 1; dig; +", AssemblerMaxVersion, Expect{2, "dig expects 1 immediate..."})
+ testProg(t, "int 1; dig junk; +", AssemblerMaxVersion, Expect{2, "dig unable to parse..."})
testProg(t, "int 1; byte 0x1234; int 2; dig 2; +", AssemblerMaxVersion)
testProg(t, "byte 0x32; byte 0x1234; int 2; dig 2; +", AssemblerMaxVersion,
- expect{5, "+ arg 1..."})
+ Expect{5, "+ arg 1..."})
testProg(t, "byte 0x32; byte 0x1234; int 2; dig 3; +", AssemblerMaxVersion,
- expect{4, "dig 3 expects 4..."})
+ Expect{4, "dig 3 expects 4..."})
testProg(t, "int 1; byte 0x1234; int 2; dig 12; +", AssemblerMaxVersion,
- expect{4, "dig 12 expects 13..."})
+ Expect{4, "dig 12 expects 13..."})
// Confirm that digging something out does not ruin our knowledge about the types in the middle
testProg(t, "int 1; byte 0x1234; byte 0x1234; dig 2; dig 3; +; pop; +", AssemblerMaxVersion,
- expect{8, "+ arg 1..."})
+ Expect{8, "+ arg 1..."})
testProg(t, "int 3; pushbytes \"123456\"; int 1; dig 2; substring3", AssemblerMaxVersion)
}
@@ -2241,39 +2253,39 @@ func TestDigAsm(t *testing.T) {
func TestEqualsTypeCheck(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- testProg(t, "int 1; byte 0x1234; ==", AssemblerMaxVersion, expect{3, "== arg 0..."})
- testProg(t, "int 1; byte 0x1234; !=", AssemblerMaxVersion, expect{3, "!= arg 0..."})
- testProg(t, "byte 0x1234; int 1; ==", AssemblerMaxVersion, expect{3, "== arg 0..."})
- testProg(t, "byte 0x1234; int 1; !=", AssemblerMaxVersion, expect{3, "!= arg 0..."})
+ testProg(t, "int 1; byte 0x1234; ==", AssemblerMaxVersion, Expect{3, "== arg 0..."})
+ testProg(t, "int 1; byte 0x1234; !=", AssemblerMaxVersion, Expect{3, "!= arg 0..."})
+ testProg(t, "byte 0x1234; int 1; ==", AssemblerMaxVersion, Expect{3, "== arg 0..."})
+ testProg(t, "byte 0x1234; int 1; !=", AssemblerMaxVersion, Expect{3, "!= arg 0..."})
}
func TestDupTypeCheck(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- testProg(t, "byte 0x1234; dup; int 1; +", AssemblerMaxVersion, expect{4, "+ arg 0..."})
+ testProg(t, "byte 0x1234; dup; int 1; +", AssemblerMaxVersion, Expect{4, "+ arg 0..."})
testProg(t, "byte 0x1234; int 1; dup; +", AssemblerMaxVersion)
- testProg(t, "byte 0x1234; int 1; dup2; +", AssemblerMaxVersion, expect{4, "+ arg 0..."})
- testProg(t, "int 1; byte 0x1234; dup2; +", AssemblerMaxVersion, expect{4, "+ arg 1..."})
+ testProg(t, "byte 0x1234; int 1; dup2; +", AssemblerMaxVersion, Expect{4, "+ arg 0..."})
+ testProg(t, "int 1; byte 0x1234; dup2; +", AssemblerMaxVersion, Expect{4, "+ arg 1..."})
- testProg(t, "byte 0x1234; int 1; dup; dig 1; len", AssemblerMaxVersion, expect{5, "len arg 0..."})
- testProg(t, "int 1; byte 0x1234; dup; dig 1; !", AssemblerMaxVersion, expect{5, "! arg 0..."})
+ testProg(t, "byte 0x1234; int 1; dup; dig 1; len", AssemblerMaxVersion, Expect{5, "len arg 0..."})
+ testProg(t, "int 1; byte 0x1234; dup; dig 1; !", AssemblerMaxVersion, Expect{5, "! arg 0..."})
- testProg(t, "byte 0x1234; int 1; dup2; dig 2; len", AssemblerMaxVersion, expect{5, "len arg 0..."})
- testProg(t, "int 1; byte 0x1234; dup2; dig 2; !", AssemblerMaxVersion, expect{5, "! arg 0..."})
+ testProg(t, "byte 0x1234; int 1; dup2; dig 2; len", AssemblerMaxVersion, Expect{5, "len arg 0..."})
+ testProg(t, "int 1; byte 0x1234; dup2; dig 2; !", AssemblerMaxVersion, Expect{5, "! arg 0..."})
}
func TestSelectTypeCheck(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- testProg(t, "int 1; int 2; int 3; select; len", AssemblerMaxVersion, expect{5, "len arg 0..."})
- testProg(t, "byte 0x1234; byte 0x5678; int 3; select; !", AssemblerMaxVersion, expect{5, "! arg 0..."})
+ testProg(t, "int 1; int 2; int 3; select; len", AssemblerMaxVersion, Expect{5, "len arg 0..."})
+ testProg(t, "byte 0x1234; byte 0x5678; int 3; select; !", AssemblerMaxVersion, Expect{5, "! arg 0..."})
}
func TestSetBitTypeCheck(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- testProg(t, "int 1; int 2; int 3; setbit; len", AssemblerMaxVersion, expect{5, "len arg 0..."})
- testProg(t, "byte 0x1234; int 2; int 3; setbit; !", AssemblerMaxVersion, expect{5, "! arg 0..."})
+ testProg(t, "int 1; int 2; int 3; setbit; len", AssemblerMaxVersion, Expect{5, "len arg 0..."})
+ testProg(t, "byte 0x1234; int 2; int 3; setbit; !", AssemblerMaxVersion, Expect{5, "! arg 0..."})
}
func TestCoverAsm(t *testing.T) {
@@ -2281,7 +2293,7 @@ func TestCoverAsm(t *testing.T) {
t.Parallel()
testProg(t, `int 4; byte "john"; int 5; cover 2; pop; +`, AssemblerMaxVersion)
testProg(t, `int 4; byte "ayush"; int 5; cover 1; pop; +`, AssemblerMaxVersion)
- testProg(t, `int 4; byte "john"; int 5; cover 2; +`, AssemblerMaxVersion, expect{5, "+ arg 1..."})
+ testProg(t, `int 4; byte "john"; int 5; cover 2; +`, AssemblerMaxVersion, Expect{5, "+ arg 1..."})
}
@@ -2291,15 +2303,15 @@ func TestUncoverAsm(t *testing.T) {
testProg(t, `int 4; byte "john"; int 5; uncover 2; +`, AssemblerMaxVersion)
testProg(t, `int 4; byte "ayush"; int 5; uncover 1; pop; +`, AssemblerMaxVersion)
testProg(t, `int 1; byte "jj"; byte "ayush"; byte "john"; int 5; uncover 4; +`, AssemblerMaxVersion)
- testProg(t, `int 4; byte "ayush"; int 5; uncover 1; +`, AssemblerMaxVersion, expect{5, "+ arg 1..."})
+ testProg(t, `int 4; byte "ayush"; int 5; uncover 1; +`, AssemblerMaxVersion, Expect{5, "+ arg 1..."})
}
func TestTxTypes(t *testing.T) {
- testProg(t, "itxn_begin; itxn_field Sender", 5, expect{2, "itxn_field Sender expects 1 stack argument..."})
- testProg(t, "itxn_begin; int 1; itxn_field Sender", 5, expect{3, "...wanted type []byte got uint64"})
+ testProg(t, "itxn_begin; itxn_field Sender", 5, Expect{2, "itxn_field Sender expects 1 stack argument..."})
+ testProg(t, "itxn_begin; int 1; itxn_field Sender", 5, Expect{3, "...wanted type []byte got uint64"})
testProg(t, "itxn_begin; byte 0x56127823; itxn_field Sender", 5)
- testProg(t, "itxn_begin; itxn_field Amount", 5, expect{2, "itxn_field Amount expects 1 stack argument..."})
- testProg(t, "itxn_begin; byte 0x87123376; itxn_field Amount", 5, expect{3, "...wanted type uint64 got []byte"})
+ testProg(t, "itxn_begin; itxn_field Amount", 5, Expect{2, "itxn_field Amount expects 1 stack argument..."})
+ testProg(t, "itxn_begin; byte 0x87123376; itxn_field Amount", 5, Expect{3, "...wanted type uint64 got []byte"})
testProg(t, "itxn_begin; int 1; itxn_field Amount", 5)
}
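A minimal sketch (illustrative only, not part of the patch) of how the now-exported Expect type and the testProg helper above fit together; the error fragments are the ones asserted in the hunks above, and "..." marks a partial match handled by testMatch:

func TestItxnFieldTypesSketch(t *testing.T) {
	// A well-typed program assembles cleanly: passing no Expect values means "no errors".
	testProg(t, "itxn_begin; byte 0x56127823; itxn_field Sender", 5)

	// Each Expect pairs a 1-based source line with an error fragment expected on that line.
	testProg(t, "itxn_begin; int 1; itxn_field Sender", 5,
		Expect{3, "...wanted type []byte got uint64"})
	testProg(t, "itxn_begin; byte 0x87123376; itxn_field Amount", 5,
		Expect{3, "...wanted type uint64 got []byte"})
}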
diff --git a/data/transactions/logic/backwardCompat_test.go b/data/transactions/logic/backwardCompat_test.go
index b56531cc0..21985d749 100644
--- a/data/transactions/logic/backwardCompat_test.go
+++ b/data/transactions/logic/backwardCompat_test.go
@@ -22,11 +22,8 @@ import (
"strings"
"testing"
- "github.com/algorand/go-algorand/config"
"github.com/algorand/go-algorand/crypto"
"github.com/algorand/go-algorand/data/basics"
- "github.com/algorand/go-algorand/data/transactions/logictest"
- "github.com/algorand/go-algorand/protocol"
"github.com/algorand/go-algorand/test/partitiontest"
"github.com/stretchr/testify/require"
)
@@ -280,98 +277,71 @@ func TestBackwardCompatTEALv1(t *testing.T) {
Data: data[:],
})
- txn := makeSampleTxn()
+ ep, tx, _ := makeSampleEnvWithVersion(1)
// RekeyTo disallowed on TEAL v0/v1
- txn.Txn.RekeyTo = basics.Address{}
- txgroup := makeSampleTxnGroup(txn)
- txn.Lsig.Logic = program
- txn.Lsig.Args = [][]byte{data[:], sig[:], pk[:], txn.Txn.Sender[:], txn.Txn.Note}
- txn.Txn.RekeyTo = basics.Address{} // RekeyTo not allowed in TEAL v1
+ tx.RekeyTo = basics.Address{}
- sb := strings.Builder{}
- ep := defaultEvalParamsWithVersion(&sb, &txn, 1)
- ep.TxnGroup = txgroup
+ ep.TxnGroup[0].Lsig.Logic = program
+ ep.TxnGroup[0].Lsig.Args = [][]byte{data[:], sig[:], pk[:], tx.Sender[:], tx.Note}
// ensure v1 program runs well on latest TEAL evaluator
require.Equal(t, uint8(1), program[0])
// Cost should stay exactly 2140
ep.Proto.LogicSigMaxCost = 2139
- err = Check(program, ep)
+ err = CheckSignature(0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "static cost")
ep.Proto.LogicSigMaxCost = 2140
- err = Check(program, ep)
+ err = CheckSignature(0, ep)
require.NoError(t, err)
- pass, err := Eval(program, ep)
+ pass, err := EvalSignature(0, ep)
if err != nil || !pass {
t.Log(hex.EncodeToString(program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
}
require.NoError(t, err)
require.True(t, pass)
// Costs for v2 should be higher because of hash opcode cost changes
- ep2 := defaultEvalParamsWithVersion(&sb, &txn, 2)
- ep2.TxnGroup = txgroup
+ ep2, tx, _ := makeSampleEnvWithVersion(2)
ep2.Proto.LogicSigMaxCost = 2307
- err = Check(opsV2.Program, ep2)
- require.Error(t, err)
- require.Contains(t, err.Error(), "static cost")
+ // ep2.TxnGroup[0].Lsig.Logic = opsV2.Program
+ ep2.TxnGroup[0].Lsig.Args = [][]byte{data[:], sig[:], pk[:], tx.Sender[:], tx.Note}
+ // Eval doesn't fail, but it would be ok (better?) if it did
+ testLogicBytes(t, opsV2.Program, ep2, "static cost", "")
ep2.Proto.LogicSigMaxCost = 2308
- err = Check(opsV2.Program, ep2)
- require.NoError(t, err)
-
- pass, err = Eval(opsV2.Program, ep2)
- if err != nil || !pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, opsV2.Program, ep2)
// ensure v0 program runs well on latest TEAL evaluator
- ep = defaultEvalParams(&sb, &txn)
- ep.TxnGroup = txgroup
+ ep, tx, _ = makeSampleEnv()
program[0] = 0
sig = c.Sign(Msg{
ProgramHash: crypto.HashObj(Program(program)),
Data: data[:],
})
- txn.Lsig.Logic = program
- txn.Lsig.Args = [][]byte{data[:], sig[:], pk[:], txn.Txn.Sender[:], txn.Txn.Note}
+ ep.TxnGroup[0].Lsig.Logic = program
+ ep.TxnGroup[0].Lsig.Args = [][]byte{data[:], sig[:], pk[:], tx.Sender[:], tx.Note}
// Cost remains the same, because v0 does not get dynamic treatment
ep.Proto.LogicSigMaxCost = 2139
- err = Check(program, ep)
- require.Error(t, err)
+ ep.MinTealVersion = new(uint64) // Was higher because sample txn has a rekey
+ testLogicBytes(t, program, ep, "static cost", "")
ep.Proto.LogicSigMaxCost = 2140
- err = Check(program, ep)
- require.NoError(t, err)
- pass, err = Eval(program, ep)
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, program, ep)
// But in v4, cost is now dynamic and exactly 1 less than v2/v3,
// because bnz skips "err". It's caught during Eval
program[0] = 4
ep.Proto.LogicSigMaxCost = 2306
- err = Check(program, ep)
- require.NoError(t, err)
- _, err = Eval(program, ep)
- require.Error(t, err)
+ testLogicBytes(t, program, ep, "dynamic cost")
ep.Proto.LogicSigMaxCost = 2307
- err = Check(program, ep)
- require.NoError(t, err)
- pass, err = Eval(program, ep)
- require.NoError(t, err)
- require.True(t, pass)
-
+ testLogicBytes(t, program, ep)
}
// ensure v2 fields error on pre TEAL v2 logicsig version
@@ -388,7 +358,6 @@ func TestBackwardCompatGlobalFields(t *testing.T) {
}
require.Greater(t, len(fields), 1)
- ledger := logictest.MakeLedger(nil)
for _, field := range fields {
text := fmt.Sprintf("global %s", field.field.String())
// check assembler fails if version before introduction
@@ -399,35 +368,22 @@ func TestBackwardCompatGlobalFields(t *testing.T) {
ops := testProg(t, text, AssemblerMaxVersion)
- proto := config.Consensus[protocol.ConsensusV23]
- require.False(t, proto.Application)
- ep := defaultEvalParams(nil, nil)
- ep.Proto = &proto
- ep.Ledger = ledger
-
- // check failure with version check
- _, err := Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "greater than protocol supported version")
- _, err = Eval(ops.Program, ep)
+ ep, _, _ := makeSampleEnvWithVersion(1)
+ ep.TxnGroup[0].Txn.RekeyTo = basics.Address{} // avoid min teal version issues
+ ep.TxnGroup[0].Lsig.Logic = ops.Program
+ _, err := EvalSignature(0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "greater than protocol supported version")
// check opcodes failures
- ops.Program[0] = 1 // set version to 1
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid global field")
- _, err = Eval(ops.Program, ep)
+ ep.TxnGroup[0].Lsig.Logic[0] = 1 // set version to 1
+ _, err = EvalSignature(0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "invalid global field")
// check opcodes failures
- ops.Program[0] = 0 // set version to 0
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid global field")
- _, err = Eval(ops.Program, ep)
+ ep.TxnGroup[0].Lsig.Logic[0] = 0 // set version to 0
+ _, err = EvalSignature(0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "invalid global field")
}
@@ -451,22 +407,15 @@ func TestBackwardCompatTxnFields(t *testing.T) {
"gtxn 0 %s",
}
- ledger := logictest.MakeLedger(nil)
- txn := makeSampleTxn()
- // We'll reject too early if we have a nonzero RekeyTo, because that
- // field must be zero for every txn in the group if this is an old
- // TEAL version
- txn.Txn.RekeyTo = basics.Address{}
- txgroup := makeSampleTxnGroup(txn)
for _, fs := range fields {
field := fs.field.String()
for _, command := range tests {
text := fmt.Sprintf(command, field)
asmError := "...available in version ..."
- if _, ok := txnaFieldSpecByField[fs.field]; ok {
+ if fs.array {
parts := strings.Split(text, " ")
op := parts[0]
- asmError = fmt.Sprintf("found array field %#v in %s op", field, op)
+ asmError = fmt.Sprintf("%s found array field %#v while expecting scalar", op, field)
}
// check assembler fails if version before introduction
testLine(t, text, assemblerNoVersion, asmError)
@@ -475,7 +424,7 @@ func TestBackwardCompatTxnFields(t *testing.T) {
}
ops, err := AssembleStringWithVersion(text, AssemblerMaxVersion)
- if _, ok := txnaFieldSpecByField[fs.field]; ok {
+ if fs.array {
// "txn Accounts" is invalid, so skip evaluation
require.Error(t, err, asmError)
continue
@@ -483,36 +432,27 @@ func TestBackwardCompatTxnFields(t *testing.T) {
require.NoError(t, err)
}
- proto := config.Consensus[protocol.ConsensusV23]
- require.False(t, proto.Application)
- ep := defaultEvalParams(nil, nil)
- ep.Proto = &proto
- ep.Ledger = ledger
- ep.TxnGroup = txgroup
+ ep, tx, _ := makeSampleEnvWithVersion(1)
+ // We'll reject too early if we have a nonzero RekeyTo, because that
+ // field must be zero for every txn in the group if this is an old
+ // TEAL version
+ tx.RekeyTo = basics.Address{}
+ ep.TxnGroup[0].Lsig.Logic = ops.Program
// check failure with version check
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "greater than protocol supported version")
- _, err = Eval(ops.Program, ep)
+ _, err = EvalSignature(0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "greater than protocol supported version")
// check opcodes failures
ops.Program[0] = 1 // set version to 1
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid txn field")
- _, err = Eval(ops.Program, ep)
+ _, err = EvalSignature(0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "invalid txn field")
// check opcodes failures
ops.Program[0] = 0 // set version to 0
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid txn field")
- _, err = Eval(ops.Program, ep)
+ _, err = EvalSignature(0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "invalid txn field")
}
@@ -525,30 +465,23 @@ func TestBackwardCompatAssemble(t *testing.T) {
// TEAL v1 does not allow branching to the last line
// TEAL v2 makes such programs legal
t.Parallel()
- source := `int 0
-int 1
-bnz done
-done:`
+ source := "int 1; int 1; bnz done; done:"
t.Run("v=default", func(t *testing.T) {
- testProg(t, source, assemblerNoVersion, expect{4, "label \"done\" is too far away"})
+ testProg(t, source, assemblerNoVersion, Expect{4, "label \"done\" is too far away"})
})
t.Run("v=default", func(t *testing.T) {
- testProg(t, source, 0, expect{4, "label \"done\" is too far away"})
+ testProg(t, source, 0, Expect{4, "label \"done\" is too far away"})
})
t.Run("v=default", func(t *testing.T) {
- testProg(t, source, 1, expect{4, "label \"done\" is too far away"})
+ testProg(t, source, 1, Expect{4, "label \"done\" is too far away"})
})
for v := uint64(2); v <= AssemblerMaxVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
- ops, err := AssembleStringWithVersion(source, v)
- require.NoError(t, err)
- ep := defaultEvalParams(nil, nil)
- _, err = Eval(ops.Program, ep)
- require.NoError(t, err)
+ testLogic(t, source, v, defaultEvalParams(nil))
})
}
}
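The rewritten tests above all follow the same shape: install the program and arguments on ep.TxnGroup[0].Lsig, then call the per-transaction-index entry points. A hedged sketch (not part of the patch), using only helpers that appear in the hunks above (makeSampleEnv, CheckSignature, EvalSignature):

func checkAndEvalSketch(t *testing.T, program []byte) {
	ep, tx, _ := makeSampleEnv()
	ep.TxnGroup[0].Lsig.Logic = program
	ep.TxnGroup[0].Lsig.Args = [][]byte{tx.Sender[:], tx.Note}

	// CheckSignature runs the static checks for the txn at group index 0 ...
	require.NoError(t, CheckSignature(0, ep))
	// ... and EvalSignature evaluates that logicsig against the same EvalParams.
	pass, err := EvalSignature(0, ep)
	require.NoError(t, err)
	require.True(t, pass)
}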
diff --git a/data/transactions/logic/blackbox_test.go b/data/transactions/logic/blackbox_test.go
new file mode 100644
index 000000000..dc6fdd968
--- /dev/null
+++ b/data/transactions/logic/blackbox_test.go
@@ -0,0 +1,95 @@
+// Copyright (C) 2019-2022 Algorand, Inc.
+// This file is part of go-algorand
+//
+// go-algorand is free software: you can redistribute it and/or modify
+// it under the terms of the GNU Affero General Public License as
+// published by the Free Software Foundation, either version 3 of the
+// License, or (at your option) any later version.
+//
+// go-algorand is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU Affero General Public License for more details.
+//
+// You should have received a copy of the GNU Affero General Public License
+// along with go-algorand. If not, see <https://www.gnu.org/licenses/>.
+
+package logic_test
+
+import (
+ "fmt"
+ "reflect"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+
+ "github.com/algorand/go-algorand/config"
+ "github.com/algorand/go-algorand/data/basics"
+ "github.com/algorand/go-algorand/data/transactions"
+ "github.com/algorand/go-algorand/data/transactions/logic"
+ "github.com/algorand/go-algorand/data/txntest"
+ "github.com/algorand/go-algorand/protocol"
+ "github.com/algorand/go-algorand/test/partitiontest"
+)
+
+func TestNewAppEvalParams(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ params := []config.ConsensusParams{
+ {Application: true, MaxAppProgramCost: 700},
+ config.Consensus[protocol.ConsensusV29],
+ config.Consensus[protocol.ConsensusFuture],
+ }
+
+ // Create some sample transactions. The main reason this is a blackbox test
+ // (_test package) is to have access to txntest.
+ payment := txntest.Txn{
+ Type: protocol.PaymentTx,
+ Sender: basics.Address{1, 2, 3, 4},
+ Receiver: basics.Address{4, 3, 2, 1},
+ Amount: 100,
+ }.SignedTxnWithAD()
+
+ appcall1 := txntest.Txn{
+ Type: protocol.ApplicationCallTx,
+ Sender: basics.Address{1, 2, 3, 4},
+ ApplicationID: basics.AppIndex(1),
+ }.SignedTxnWithAD()
+
+ appcall2 := appcall1
+ appcall2.Txn.ApplicationID = basics.AppIndex(2)
+
+ type evalTestCase struct {
+ group []transactions.SignedTxnWithAD
+ numAppCalls int
+ }
+
+ // Create some groups with these transactions
+ cases := []evalTestCase{
+ {[]transactions.SignedTxnWithAD{payment}, 0},
+ {[]transactions.SignedTxnWithAD{appcall1}, 1},
+ {[]transactions.SignedTxnWithAD{payment, payment}, 0},
+ {[]transactions.SignedTxnWithAD{appcall1, payment}, 1},
+ {[]transactions.SignedTxnWithAD{payment, appcall1}, 1},
+ {[]transactions.SignedTxnWithAD{appcall1, appcall2}, 2},
+ {[]transactions.SignedTxnWithAD{appcall1, appcall2, appcall1}, 3},
+ {[]transactions.SignedTxnWithAD{payment, appcall1, payment}, 1},
+ {[]transactions.SignedTxnWithAD{appcall1, payment, appcall2}, 2},
+ }
+
+ for i, param := range params {
+ for j, testCase := range cases {
+ t.Run(fmt.Sprintf("i=%d,j=%d", i, j), func(t *testing.T) {
+ ep := logic.NewEvalParams(testCase.group, &param, nil)
+ require.NotNil(t, ep)
+ require.Equal(t, ep.TxnGroup, testCase.group)
+ require.Equal(t, *ep.Proto, param)
+ if reflect.DeepEqual(param, config.Consensus[protocol.ConsensusV29]) {
+ require.Nil(t, ep.PooledApplicationBudget)
+ } else if reflect.DeepEqual(param, config.Consensus[protocol.ConsensusFuture]) {
+ require.Equal(t, *ep.PooledApplicationBudget, uint64(param.MaxAppProgramCost*testCase.numAppCalls))
+ }
+ })
+ }
+ }
+}
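A worked instance of the pooling assertion above (illustrative only, not part of the patch): with MaxAppProgramCost of 700 (the value used in the first parameter set above) and a group containing two application calls, the shared budget starts at 700 * 2 = 1400, while a group of plain payments gets no pooled application budget:

	future := config.Consensus[protocol.ConsensusFuture]
	ep := logic.NewEvalParams([]transactions.SignedTxnWithAD{appcall1, appcall2}, &future, nil)
	// here *ep.PooledApplicationBudget == uint64(future.MaxAppProgramCost*2), i.e. 1400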
diff --git a/data/transactions/logic/debugger.go b/data/transactions/logic/debugger.go
index 5ec924e2a..25f12fc73 100644
--- a/data/transactions/logic/debugger.go
+++ b/data/transactions/logic/debugger.go
@@ -60,13 +60,13 @@ type PCOffset struct {
// to json and send to tealdbg
type DebugState struct {
// fields set once on Register
- ExecID string `codec:"execid"`
- Disassembly string `codec:"disasm"`
- PCOffset []PCOffset `codec:"pctooffset"`
- TxnGroup []transactions.SignedTxn `codec:"txngroup"`
- GroupIndex int `codec:"gindex"`
- Proto *config.ConsensusParams `codec:"proto"`
- Globals []basics.TealValue `codec:"globals"`
+ ExecID string `codec:"execid"`
+ Disassembly string `codec:"disasm"`
+ PCOffset []PCOffset `codec:"pctooffset"`
+ TxnGroup []transactions.SignedTxnWithAD `codec:"txngroup"`
+ GroupIndex int `codec:"gindex"`
+ Proto *config.ConsensusParams `codec:"proto"`
+ Globals []basics.TealValue `codec:"globals"`
// fields updated every step
PC int `codec:"pc"`
@@ -105,6 +105,10 @@ func makeDebugState(cx *EvalContext) DebugState {
globals := make([]basics.TealValue, len(globalFieldSpecs))
for _, fs := range globalFieldSpecs {
+ // Don't try to grab app only fields when evaluating a signature
+ if (cx.runModeFlags&runModeSignature) != 0 && fs.mode == runModeApplication {
+ continue
+ }
sv, err := cx.globalFieldToValue(fs)
if err != nil {
sv = stackValue{Bytes: []byte(err.Error())}
@@ -113,15 +117,8 @@ func makeDebugState(cx *EvalContext) DebugState {
}
ds.Globals = globals
- // pre-allocate state maps
if (cx.runModeFlags & runModeApplication) != 0 {
- ds.EvalDelta, err = cx.Ledger.GetDelta(&cx.Txn.Txn)
- if err != nil {
- sv := stackValue{Bytes: []byte(err.Error())}
- tv := stackValueToTealValue(&sv)
- vd := tv.ToValueDelta()
- ds.EvalDelta.GlobalDelta = basics.StateDelta{"error": vd}
- }
+ ds.EvalDelta = cx.Txn.EvalDelta
}
return ds
@@ -218,14 +215,7 @@ func (cx *EvalContext) refreshDebugState() *DebugState {
ds.Scratch = scratch
if (cx.runModeFlags & runModeApplication) != 0 {
- var err error
- ds.EvalDelta, err = cx.Ledger.GetDelta(&cx.Txn.Txn)
- if err != nil {
- sv := stackValue{Bytes: []byte(err.Error())}
- tv := stackValueToTealValue(&sv)
- vd := tv.ToValueDelta()
- ds.EvalDelta.GlobalDelta = basics.StateDelta{"error": vd}
- }
+ ds.EvalDelta = cx.Txn.EvalDelta
}
return ds
diff --git a/data/transactions/logic/debugger_test.go b/data/transactions/logic/debugger_test.go
index fa9bb4065..80e4a639a 100644
--- a/data/transactions/logic/debugger_test.go
+++ b/data/transactions/logic/debugger_test.go
@@ -58,7 +58,7 @@ bytec_2
!=
bytec_3
bytec 4
-==
+!=
&&
&&
`
@@ -68,27 +68,20 @@ func TestWebDebuggerManual(t *testing.T) {
debugURL := os.Getenv("TEAL_DEBUGGER_URL")
if len(debugURL) == 0 {
- return
+ t.Skip("this must be run manually")
}
- txn := makeSampleTxn()
- txgroup := makeSampleTxnGroup(txn)
- txn.Lsig.Args = [][]byte{
- txn.Txn.Sender[:],
- txn.Txn.Receiver[:],
- txn.Txn.CloseRemainderTo[:],
- txn.Txn.VotePK[:],
- txn.Txn.SelectionPK[:],
- txn.Txn.Note,
+ ep, tx, _ := makeSampleEnv()
+ ep.TxnGroup[0].Lsig.Args = [][]byte{
+ tx.Sender[:],
+ tx.Receiver[:],
+ tx.CloseRemainderTo[:],
+ tx.VotePK[:],
+ tx.SelectionPK[:],
+ tx.Note,
}
-
- ops, err := AssembleString(testProgram)
- require.NoError(t, err)
- ep := defaultEvalParams(nil, &txn)
- ep.TxnGroup = txgroup
ep.Debugger = &WebDebuggerHook{URL: debugURL}
- _, err = Eval(ops.Program, ep)
- require.NoError(t, err)
+ testLogic(t, testProgram, AssemblerMaxVersion, ep)
}
type testDbgHook struct {
@@ -120,17 +113,14 @@ func TestDebuggerHook(t *testing.T) {
partitiontest.PartitionTest(t)
testDbg := testDbgHook{}
- ops, err := AssembleString(testProgram)
- require.NoError(t, err)
- ep := defaultEvalParams(nil, nil)
+ ep := defaultEvalParams(nil)
ep.Debugger = &testDbg
- _, err = Eval(ops.Program, ep)
- require.NoError(t, err)
+ testLogic(t, testProgram, AssemblerMaxVersion, ep)
require.Equal(t, 1, testDbg.register)
require.Equal(t, 1, testDbg.complete)
require.Greater(t, testDbg.update, 1)
- require.Equal(t, 1, len(testDbg.state.Stack))
+ require.Len(t, testDbg.state.Stack, 1)
}
func TestLineToPC(t *testing.T) {
diff --git a/data/transactions/logic/doc.go b/data/transactions/logic/doc.go
index 5149f4420..4b6972198 100644
--- a/data/transactions/logic/doc.go
+++ b/data/transactions/logic/doc.go
@@ -17,21 +17,19 @@
package logic
import (
- "fmt"
-
"github.com/algorand/go-algorand/protocol"
)
// short description of every op
var opDocByName = map[string]string{
- "err": "Error. Fail immediately. This is primarily a fencepost against accidental zero bytes getting compiled into programs.",
- "sha256": "SHA256 hash of value X, yields [32]byte",
- "keccak256": "Keccak256 hash of value X, yields [32]byte",
- "sha512_256": "SHA512_256 hash of value X, yields [32]byte",
+ "err": "Fail immediately.",
+ "sha256": "SHA256 hash of value A, yields [32]byte",
+ "keccak256": "Keccak256 hash of value A, yields [32]byte",
+ "sha512_256": "SHA512_256 hash of value A, yields [32]byte",
"ed25519verify": "for (data A, signature B, pubkey C) verify the signature of (\"ProgData\" || program_hash || data) against the pubkey => {0 or 1}",
"ecdsa_verify": "for (data A, signature B, C and pubkey D, E) verify the signature of the data against the pubkey => {0 or 1}",
- "ecdsa_pk_decompress": "decompress pubkey A into components X, Y => [*... stack*, X, Y]",
- "ecdsa_pk_recover": "for (data A, recovery id B, signature C, D) recover a public key => [*... stack*, X, Y]",
+ "ecdsa_pk_decompress": "decompress pubkey A into components X, Y",
+ "ecdsa_pk_recover": "for (data A, recovery id B, signature C, D) recover a public key",
"+": "A plus B. Fail on overflow.",
"-": "A minus B. Fail if B > A.",
@@ -45,135 +43,141 @@ var opDocByName = map[string]string{
"||": "A is not zero or B is not zero => {0 or 1}",
"==": "A is equal to B => {0 or 1}",
"!=": "A is not equal to B => {0 or 1}",
- "!": "X == 0 yields 1; else 0",
- "len": "yields length of byte value X",
- "itob": "converts uint64 X to big endian bytes",
- "btoi": "converts bytes X as big endian to uint64",
+ "!": "A == 0 yields 1; else 0",
+ "len": "yields length of byte value A",
+ "itob": "converts uint64 A to big endian bytes",
+ "btoi": "converts bytes A as big endian to uint64",
"%": "A modulo B. Fail if B == 0.",
"|": "A bitwise-or B",
"&": "A bitwise-and B",
"^": "A bitwise-xor B",
- "~": "bitwise invert value X",
+ "~": "bitwise invert value A",
"shl": "A times 2^B, modulo 2^64",
"shr": "A divided by 2^B",
- "sqrt": "The largest integer B such that B^2 <= X",
- "bitlen": "The highest set bit in X. If X is a byte-array, it is interpreted as a big-endian unsigned integer. bitlen of 0 is 0, bitlen of 8 is 4",
+ "sqrt": "The largest integer I such that I^2 <= A",
+ "bitlen": "The highest set bit in A. If A is a byte-array, it is interpreted as a big-endian unsigned integer. bitlen of 0 is 0, bitlen of 8 is 4",
"exp": "A raised to the Bth power. Fail if A == B == 0 and on overflow",
- "expw": "A raised to the Bth power as a 128-bit long result as low (top) and high uint64 values on the stack. Fail if A == B == 0 or if the results exceeds 2^128-1",
- "mulw": "A times B out to 128-bit long result as low (top) and high uint64 values on the stack",
- "addw": "A plus B out to 128-bit long result as sum (top) and carry-bit uint64 values on the stack",
- "divmodw": "Pop four uint64 values. The deepest two are interpreted as a uint128 dividend (deepest value is high word), the top two are interpreted as a uint128 divisor. Four uint64 values are pushed to the stack. The deepest two are the quotient (deeper value is the high uint64). The top two are the remainder, low bits on top.",
+ "expw": "A raised to the Bth power as a 128-bit result in two uint64s. X is the high 64 bits, Y is the low. Fail if A == B == 0 or if the results exceeds 2^128-1",
+ "mulw": "A times B as a 128-bit result in two uint64s. X is the high 64 bits, Y is the low",
+ "addw": "A plus B as a 128-bit result. X is the carry-bit, Y is the low-order 64 bits.",
+ "divmodw": "W,X = (A,B / C,D); Y,Z = (A,B modulo C,D)",
"intcblock": "prepare block of uint64 constants for use by intc",
- "intc": "push Ith constant from intcblock to stack",
- "intc_0": "push constant 0 from intcblock to stack",
- "intc_1": "push constant 1 from intcblock to stack",
- "intc_2": "push constant 2 from intcblock to stack",
- "intc_3": "push constant 3 from intcblock to stack",
- "pushint": "push immediate UINT to the stack as an integer",
+ "intc": "Ith constant from intcblock",
+ "intc_0": "constant 0 from intcblock",
+ "intc_1": "constant 1 from intcblock",
+ "intc_2": "constant 2 from intcblock",
+ "intc_3": "constant 3 from intcblock",
+ "pushint": "immediate UINT",
"bytecblock": "prepare block of byte-array constants for use by bytec",
- "bytec": "push Ith constant from bytecblock to stack",
- "bytec_0": "push constant 0 from bytecblock to stack",
- "bytec_1": "push constant 1 from bytecblock to stack",
- "bytec_2": "push constant 2 from bytecblock to stack",
- "bytec_3": "push constant 3 from bytecblock to stack",
- "pushbytes": "push the following program bytes to the stack",
-
- "bzero": "push a byte-array of length X, containing all zero bytes",
- "arg": "push Nth LogicSig argument to stack",
- "arg_0": "push LogicSig argument 0 to stack",
- "arg_1": "push LogicSig argument 1 to stack",
- "arg_2": "push LogicSig argument 2 to stack",
- "arg_3": "push LogicSig argument 3 to stack",
- "args": "push Xth LogicSig argument to stack",
- "txn": "push field F of current transaction to stack",
- "gtxn": "push field F of the Tth transaction in the current group",
- "gtxns": "push field F of the Xth transaction in the current group",
- "txna": "push Ith value of the array field F of the current transaction",
- "gtxna": "push Ith value of the array field F from the Tth transaction in the current group",
- "gtxnsa": "push Ith value of the array field F from the Xth transaction in the current group",
- "txnas": "push Xth value of the array field F of the current transaction",
- "gtxnas": "push Xth value of the array field F from the Tth transaction in the current group",
- "gtxnsas": "pop an index A and an index B. push Bth value of the array field F from the Ath transaction in the current group",
- "itxn": "push field F of the last inner transaction to stack",
- "itxna": "push Ith value of the array field F of the last inner transaction to stack",
-
- "global": "push value from globals to stack",
- "load": "copy a value from scratch space to the stack. All scratch spaces are 0 at program start.",
- "store": "pop value X. store X to the Ith scratch space",
- "loads": "copy a value from the Xth scratch space to the stack. All scratch spaces are 0 at program start.",
- "stores": "pop indexes A and B. store B to the Ath scratch space",
- "gload": "push Ith scratch space index of the Tth transaction in the current group",
- "gloads": "push Ith scratch space index of the Xth transaction in the current group",
- "gaid": "push the ID of the asset or application created in the Tth transaction of the current group",
- "gaids": "push the ID of the asset or application created in the Xth transaction of the current group",
-
- "bnz": "branch to TARGET if value X is not zero",
- "bz": "branch to TARGET if value X is zero",
+ "bytec": "Ith constant from bytecblock",
+ "bytec_0": "constant 0 from bytecblock",
+ "bytec_1": "constant 1 from bytecblock",
+ "bytec_2": "constant 2 from bytecblock",
+ "bytec_3": "constant 3 from bytecblock",
+ "pushbytes": "immediate BYTES",
+
+ "bzero": "zero filled byte-array of length A",
+ "arg": "Nth LogicSig argument",
+ "arg_0": "LogicSig argument 0",
+ "arg_1": "LogicSig argument 1",
+ "arg_2": "LogicSig argument 2",
+ "arg_3": "LogicSig argument 3",
+ "args": "Ath LogicSig argument",
+ "txn": "field F of current transaction",
+ "gtxn": "field F of the Tth transaction in the current group",
+ "gtxns": "field F of the Ath transaction in the current group",
+ "txna": "Ith value of the array field F of the current transaction",
+ "gtxna": "Ith value of the array field F from the Tth transaction in the current group",
+ "gtxnsa": "Ith value of the array field F from the Ath transaction in the current group",
+ "txnas": "Ath value of the array field F of the current transaction",
+ "gtxnas": "Ath value of the array field F from the Tth transaction in the current group",
+ "gtxnsas": "Bth value of the array field F from the Ath transaction in the current group",
+ "itxn": "field F of the last inner transaction",
+ "itxna": "Ith value of the array field F of the last inner transaction",
+ "gitxn": "field F of the Tth transaction in the last inner group submitted",
+ "gitxna": "Ith value of the array field F from the Tth transaction in the last inner group submitted",
+
+ "global": "global field F",
+ "load": "Ith scratch space value. All scratch spaces are 0 at program start.",
+ "store": "store A to the Ith scratch space",
+ "loads": "Ath scratch space value. All scratch spaces are 0 at program start.",
+ "stores": "store B to the Ath scratch space",
+ "gload": "Ith scratch space value of the Tth transaction in the current group",
+ "gloads": "Ith scratch space value of the Ath transaction in the current group",
+ "gloadss": "Bth scratch space value of the Ath transaction in the current group",
+ "gaid": "ID of the asset or application created in the Tth transaction of the current group",
+ "gaids": "ID of the asset or application created in the Ath transaction of the current group",
+
+ "bnz": "branch to TARGET if value A is not zero",
+ "bz": "branch to TARGET if value A is zero",
"b": "branch unconditionally to TARGET",
- "return": "use last value on stack as success value; end",
- "pop": "discard value X from stack",
- "dup": "duplicate last value on stack",
- "dup2": "duplicate two last values on stack: A, B -> A, B, A, B",
- "dig": "push the Nth value from the top of the stack. dig 0 is equivalent to dup",
+ "return": "use A as success value; end",
+ "pop": "discard A",
+ "dup": "duplicate A",
+ "dup2": "duplicate A and B",
+ "dig": "Nth value from the top of the stack. dig 0 is equivalent to dup",
"cover": "remove top of stack, and place it deeper in the stack such that N elements are above it. Fails if stack depth <= N.",
"uncover": "remove the value at depth N in the stack and shift above items down so the Nth deep value is on top of the stack. Fails if stack depth <= N.",
- "swap": "swaps two last values on stack: A, B -> B, A",
- "select": "selects one of two values based on top-of-stack: A, B, C -> (if C != 0 then B else A)",
-
- "concat": "pop two byte-arrays A and B and join them, push the result",
- "substring": "pop a byte-array A. For immediate values in 0..255 S and E: extract a range of bytes from A starting at S up to but not including E, push the substring result. If E < S, or either is larger than the array length, the program fails",
- "substring3": "pop a byte-array A and two integers B and C. Extract a range of bytes from A starting at B up to but not including C, push the substring result. If C < B, or either is larger than the array length, the program fails",
- "getbit": "pop a target A (integer or byte-array), and index B. Push the Bth bit of A.",
- "setbit": "pop a target A, index B, and bit C. Set the Bth bit of A to C, and push the result",
- "getbyte": "pop a byte-array A and integer B. Extract the Bth byte of A and push it as an integer",
- "setbyte": "pop a byte-array A, integer B, and small integer C (between 0..255). Set the Bth byte of A to C, and push the result",
- "extract": "pop a byte-array A. For immediate values in 0..255 S and L: extract a range of bytes from A starting at S up to but not including S+L, push the substring result. If L is 0, then extract to the end of the string. If S or S+L is larger than the array length, the program fails",
- "extract3": "pop a byte-array A and two integers B and C. Extract a range of bytes from A starting at B up to but not including B+C, push the substring result. If B+C is larger than the array length, the program fails",
- "extract_uint16": "pop a byte-array A and integer B. Extract a range of bytes from A starting at B up to but not including B+2, convert bytes as big endian and push the uint64 result. If B+2 is larger than the array length, the program fails",
- "extract_uint32": "pop a byte-array A and integer B. Extract a range of bytes from A starting at B up to but not including B+4, convert bytes as big endian and push the uint64 result. If B+4 is larger than the array length, the program fails",
- "extract_uint64": "pop a byte-array A and integer B. Extract a range of bytes from A starting at B up to but not including B+8, convert bytes as big endian and push the uint64 result. If B+8 is larger than the array length, the program fails",
- "base64_decode": "decode X which was base64-encoded using _encoding_ E. Fail if X is not base64 encoded with encoding E",
+ "swap": "swaps A and B on stack",
+ "select": "selects one of two values based on top-of-stack: B if C != 0, else A",
+
+ "concat": "join A and B",
+ "substring": "A range of bytes from A starting at S up to but not including E. If E < S, or either is larger than the array length, the program fails",
+ "substring3": "A range of bytes from A starting at B up to but not including C. If C < B, or either is larger than the array length, the program fails",
+ "getbit": "Bth bit of (byte-array or integer) A.",
+ "setbit": "Copy of (byte-array or integer) A, with the Bth bit set to (0 or 1) C",
+ "getbyte": "Bth byte of A, as an integer",
+ "setbyte": "Copy of A with the Bth byte set to small integer (between 0..255) C",
+ "extract": "A range of bytes from A starting at S up to but not including S+L. If L is 0, then extract to the end of the string. If S or S+L is larger than the array length, the program fails",
+ "extract3": "A range of bytes from A starting at B up to but not including B+C. If B+C is larger than the array length, the program fails",
+ "extract_uint16": "A uint16 formed from a range of big-endian bytes from A starting at B up to but not including B+2. If B+2 is larger than the array length, the program fails",
+ "extract_uint32": "A uint32 formed from a range of big-endian bytes from A starting at B up to but not including B+4. If B+4 is larger than the array length, the program fails",
+ "extract_uint64": "A uint64 formed from a range of big-endian bytes from A starting at B up to but not including B+8. If B+8 is larger than the array length, the program fails",
+ "base64_decode": "decode A which was base64-encoded using _encoding_ E. Fail if A is not base64 encoded with encoding E",
"balance": "get balance for account A, in microalgos. The balance is observed after the effects of previous transactions in the group, and after the fee for the current transaction is deducted.",
"min_balance": "get minimum required balance for account A, in microalgos. Required balance is affected by [ASA](https://developer.algorand.org/docs/features/asa/#assets-overview) and [App](https://developer.algorand.org/docs/features/asc1/stateful/#minimum-balance-requirement-for-a-smart-contract) usage. When creating or opting into an app, the minimum balance grows before the app code runs, therefore the increase is visible there. When deleting or closing out, the minimum balance decreases after the app executes.",
- "app_opted_in": "check if account A opted in for the application B => {0 or 1}",
- "app_local_get": "read from account A from local state of the current application key B => value",
- "app_local_get_ex": "read from account A from local state of the application B key C => [*... stack*, value, 0 or 1]",
- "app_global_get": "read key A from global state of a current application => value",
- "app_global_get_ex": "read from application A global state key B => [*... stack*, value, 0 or 1]",
- "app_local_put": "write to account specified by A to local state of a current application key B with value C",
- "app_global_put": "write key A and value B to global state of the current application",
- "app_local_del": "delete from account A local state key B of the current application",
- "app_global_del": "delete key A from a global state of the current application",
- "asset_holding_get": "read from account A and asset B holding field X (imm arg) => {0 or 1 (top), value}",
- "asset_params_get": "read from asset A params field X (imm arg) => {0 or 1 (top), value}",
- "app_params_get": "read from app A params field X (imm arg) => {0 or 1 (top), value}",
- "assert": "immediately fail unless value X is a non-zero number",
+ "app_opted_in": "1 if account A is opted in to application B, else 0",
+ "app_local_get": "local state of the key B in the current application in account A",
+ "app_local_get_ex": "X is the local state of application B, key C in account A. Y is 1 if key existed, else 0",
+ "app_global_get": "global state of the key A in the current application",
+ "app_global_get_ex": "X is the global state of application A, key B. Y is 1 if key existed, else 0",
+ "app_local_put": "write C to key B in account A's local state of the current application",
+ "app_global_put": "write B to key A in the global state of the current application",
+ "app_local_del": "delete key B from account A's local state of the current application",
+ "app_global_del": "delete key A from the global state of the current application",
+ "asset_holding_get": "X is field F from account A's holding of asset B. Y is 1 if A is opted into B, else 0",
+ "asset_params_get": "X is field F from asset A. Y is 1 if A exists, else 0",
+ "app_params_get": "X is field F from app A. Y is 1 if A exists, else 0",
+ "acct_params_get": "X is field F from account A. Y is 1 if A owns positive algos, else 0",
+ "assert": "immediately fail unless X is a non-zero number",
"callsub": "branch unconditionally to TARGET, saving the next instruction on the call stack",
"retsub": "pop the top instruction from the call stack and branch to it",
- "b+": "A plus B, where A and B are byte-arrays interpreted as big-endian unsigned integers",
- "b-": "A minus B, where A and B are byte-arrays interpreted as big-endian unsigned integers. Fail on underflow.",
- "b/": "A divided by B (truncated division), where A and B are byte-arrays interpreted as big-endian unsigned integers. Fail if B is zero.",
- "b*": "A times B, where A and B are byte-arrays interpreted as big-endian unsigned integers.",
- "b<": "A is less than B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}",
- "b>": "A is greater than B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}",
- "b<=": "A is less than or equal to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}",
- "b>=": "A is greater than or equal to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}",
- "b==": "A is equals to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}",
- "b!=": "A is not equal to B, where A and B are byte-arrays interpreted as big-endian unsigned integers => { 0 or 1}",
- "b%": "A modulo B, where A and B are byte-arrays interpreted as big-endian unsigned integers. Fail if B is zero.",
- "b|": "A bitwise-or B, where A and B are byte-arrays, zero-left extended to the greater of their lengths",
- "b&": "A bitwise-and B, where A and B are byte-arrays, zero-left extended to the greater of their lengths",
- "b^": "A bitwise-xor B, where A and B are byte-arrays, zero-left extended to the greater of their lengths",
- "b~": "X with all bits inverted",
-
- "log": "write bytes to log state of the current application",
+ "b+": "A plus B. A and B are interpreted as big-endian unsigned integers",
+ "b-": "A minus B. A and B are interpreted as big-endian unsigned integers. Fail on underflow.",
+ "b/": "A divided by B (truncated division). A and B are interpreted as big-endian unsigned integers. Fail if B is zero.",
+ "b*": "A times B. A and B are interpreted as big-endian unsigned integers.",
+ "b<": "1 if A is less than B, else 0. A and B are interpreted as big-endian unsigned integers",
+ "b>": "1 if A is greater than B, else 0. A and B are interpreted as big-endian unsigned integers",
+ "b<=": "1 if A is less than or equal to B, else 0. A and B are interpreted as big-endian unsigned integers",
+ "b>=": "1 if A is greater than or equal to B, else 0. A and B are interpreted as big-endian unsigned integers",
+ "b==": "1 if A is equal to B, else 0. A and B are interpreted as big-endian unsigned integers",
+ "b!=": "0 if A is equal to B, else 1. A and B are interpreted as big-endian unsigned integers",
+ "b%": "A modulo B. A and B are interpreted as big-endian unsigned integers. Fail if B is zero.",
+ "b|": "A bitwise-or B. A and B are zero-left extended to the greater of their lengths",
+ "b&": "A bitwise-and B. A and B are zero-left extended to the greater of their lengths",
+ "b^": "A bitwise-xor B. A and B are zero-left extended to the greater of their lengths",
+ "b~": "A with all bits inverted",
+
+ "bsqrt": "The largest integer I such that I^2 <= A. A and I are interpreted as big-endian unsigned integers",
+
+ "log": "write A to log state of the current application",
"itxn_begin": "begin preparation of a new inner transaction in a new transaction group",
"itxn_next": "begin preparation of a new inner transaction in the same transaction group",
- "itxn_field": "set field F of the current inner transaction to X",
- "itxn_submit": "execute the current inner transaction group. Fail if executing this group would exceed 16 total inner transactions, or if any transaction in the group fails.",
+ "itxn_field": "set field F of the current inner transaction to A",
+ "itxn_submit": "execute the current inner transaction group. Fail if executing this group would exceed the inner transaction limit, or if any transaction in the group fails.",
}
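The big-endian conversion that the `extract_uint*` docs above describe maps directly onto `encoding/binary`; a standalone Go sketch of the documented `extract_uint64` behavior (not part of the patch, shown only to make the byte ordering concrete):

package main

import (
	"encoding/binary"
	"fmt"
)

// extractUint64 mirrors the documented semantics: read 8 bytes of a starting
// at b as a big-endian uint64, failing if b+8 exceeds the array length.
func extractUint64(a []byte, b uint64) (uint64, error) {
	if b+8 > uint64(len(a)) {
		return 0, fmt.Errorf("extraction end %d is beyond length %d", b+8, len(a))
	}
	return binary.BigEndian.Uint64(a[b : b+8]), nil
}

func main() {
	v, _ := extractUint64([]byte{0, 0, 0, 0, 0, 0, 0, 0, 1, 2}, 2)
	fmt.Println(v) // 258, i.e. 0x0102 read big-endian
}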
// OpDoc returns a description of the op
@@ -222,10 +226,13 @@ var opcodeImmediateNotes = map[string]string{
"asset_holding_get": "{uint8 asset holding field index}",
"asset_params_get": "{uint8 asset params field index}",
"app_params_get": "{uint8 app params field index}",
+ "acct_params_get": "{uint8 account params field index}",
"itxn_field": "{uint8 transaction field index}",
"itxn": "{uint8 transaction field index}",
"itxna": "{uint8 transaction field index} {uint8 transaction field array index}",
+ "gitxn": "{uint8 transaction group index} {uint8 transaction field index}",
+ "gitxna": "{uint8 transaction group index} {uint8 transaction field index} {uint8 transaction field array index}",
"ecdsa_verify": "{uint8 curve index}",
"ecdsa_pk_decompress": "{uint8 curve index}",
@@ -239,6 +246,21 @@ func OpImmediateNote(opName string) string {
return opcodeImmediateNotes[opName]
}
+var opcodeSpecialStackEffects = map[string]string{
+ "dup": "..., A &rarr; ..., A, A",
+ "dup2": "..., A, B &rarr; ..., A, B, A, B",
+ "dig": "..., A, [N items] &rarr; ..., A, [N items], A",
+ "swap": "..., A, B &rarr; ..., B, A",
+ "select": "..., A, B, C &rarr; ..., A or B",
+ "cover": "..., [N items], A &rarr; ..., A, [N items]",
+ "uncover": "..., A, [N items] &rarr; ..., [N items], A",
+}
+
+// OpStackEffects returns a "stack pattern" for opcodes that do not have a derivable effect
+func OpStackEffects(opName string) string {
+ return opcodeSpecialStackEffects[opName]
+}
+
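For opcodes whose stack effect is not derivable from the OpSpec signature, the pattern is looked up by name. A small illustrative use of the exported accessor, assuming this branch of github.com/algorand/go-algorand:

package main

import (
	"fmt"

	"github.com/algorand/go-algorand/data/transactions/logic"
)

func main() {
	fmt.Println(logic.OpStackEffects("swap"))   // ..., A, B &rarr; ..., B, A
	fmt.Println(logic.OpStackEffects("select")) // ..., A, B, C &rarr; ..., A or B
	fmt.Println(logic.OpStackEffects("+"))      // empty string: the effect is derivable, so there is no entry
}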
// further documentation on the function of the opcode
var opDocExtras = map[string]string{
"ed25519verify": "The 32 byte public key is the last element on the stack, preceded by the 64 byte signature at the second-to-last element on the stack, preceded by the data which was signed at the third-to-last element on the stack.",
@@ -256,37 +278,39 @@ var opDocExtras = map[string]string{
"+": "Overflow is an error condition which halts execution and fails the transaction. Full precision is available from `addw`.",
"/": "`divmodw` is available to divide the two-element values produced by `mulw` and `addw`.",
"bitlen": "bitlen interprets arrays as big-endian integers, unlike setbit/getbit",
+ "divmodw": "The notation J,K indicates that two uint64 values J and K are interpreted as a uint128 value, with J as the high uint64 and K the low.",
"txn": "FirstValidTime causes the program to fail. The field is reserved for future use.",
"gtxn": "for notes on transaction fields available, see `txn`. If this transaction is _i_ in the group, `gtxn i field` is equivalent to `txn field`.",
"gtxns": "for notes on transaction fields available, see `txn`. If top of stack is _i_, `gtxns field` is equivalent to `gtxn _i_ field`. gtxns exists so that _i_ can be calculated, often based on the index of the current transaction.",
"gload": "`gload` fails unless the requested transaction is an ApplicationCall and T < GroupIndex.",
- "gloads": "`gloads` fails unless the requested transaction is an ApplicationCall and X < GroupIndex.",
+ "gloads": "`gloads` fails unless the requested transaction is an ApplicationCall and A < GroupIndex.",
"gaid": "`gaid` fails unless the requested transaction created an asset or application and T < GroupIndex.",
- "gaids": "`gaids` fails unless the requested transaction created an asset or application and X < GroupIndex.",
+ "gaids": "`gaids` fails unless the requested transaction created an asset or application and A < GroupIndex.",
"btoi": "`btoi` fails if the input is longer than 8 bytes.",
"concat": "`concat` fails if the result would be greater than 4096 bytes.",
"pushbytes": "pushbytes args are not added to the bytecblock during assembly processes",
"pushint": "pushint args are not added to the intcblock during assembly processes",
"getbit": "see explanation of bit ordering in setbit",
"setbit": "When A is a uint64, index 0 is the least significant bit. Setting bit 3 to 1 on the integer 0 yields 8, or 2^3. When A is a byte array, index 0 is the leftmost bit of the leftmost byte. Setting bits 0 through 11 to 1 in a 4-byte-array of 0s yields the byte array 0xfff00000. Setting bit 3 to 1 on the 1-byte-array 0x00 yields the byte array 0x10.",
- "balance": "params: Before v4, Txn.Accounts offset. Since v4, Txn.Accounts offset or an account address that appears in Txn.Accounts or is Txn.Sender). Return: value.",
- "min_balance": "params: Before v4, Txn.Accounts offset. Since v4, Txn.Accounts offset or an account address that appears in Txn.Accounts or is Txn.Sender). Return: value.",
- "app_opted_in": "params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), application id (or, since v4, a Txn.ForeignApps offset). Return: 1 if opted in and 0 otherwise.",
- "app_local_get": "params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), state key. Return: value. The value is zero (of type uint64) if the key does not exist.",
- "app_local_get_ex": "params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), application id (or, since v4, a Txn.ForeignApps offset), state key. Return: did_exist flag (top of the stack, 1 if the application and key existed and 0 otherwise), value. The value is zero (of type uint64) if the key does not exist.",
- "app_global_get_ex": "params: Txn.ForeignApps offset (or, since v4, an application id that appears in Txn.ForeignApps or is the CurrentApplicationID), state key. Return: did_exist flag (top of the stack, 1 if the application and key existed and 0 otherwise), value. The value is zero (of type uint64) if the key does not exist.",
+ "balance": "params: Txn.Accounts offset (or, since v4, an _available_ account address), _available_ application id (or, since v4, a Txn.ForeignApps offset). Return: value.",
+ "min_balance": "params: Txn.Accounts offset (or, since v4, an _available_ account address), _available_ application id (or, since v4, a Txn.ForeignApps offset). Return: value.",
+ "app_opted_in": "params: Txn.Accounts offset (or, since v4, an _available_ account address), _available_ application id (or, since v4, a Txn.ForeignApps offset). Return: 1 if opted in and 0 otherwise.",
+ "app_local_get": "params: Txn.Accounts offset (or, since v4, an _available_ account address), state key. Return: value. The value is zero (of type uint64) if the key does not exist.",
+ "app_local_get_ex": "params: Txn.Accounts offset (or, since v4, an _available_ account address), _available_ application id (or, since v4, a Txn.ForeignApps offset), state key. Return: did_exist flag (top of the stack, 1 if the application and key existed and 0 otherwise), value. The value is zero (of type uint64) if the key does not exist.",
+ "app_global_get_ex": "params: Txn.ForeignApps offset (or, since v4, an _available_ application id), state key. Return: did_exist flag (top of the stack, 1 if the application and key existed and 0 otherwise), value. The value is zero (of type uint64) if the key does not exist.",
"app_global_get": "params: state key. Return: value. The value is zero (of type uint64) if the key does not exist.",
- "app_local_put": "params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), state key, value.",
- "app_local_del": "params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), state key.\n\nDeleting a key which is already absent has no effect on the application local state. (In particular, it does _not_ cause the program to fail.)",
+ "app_local_put": "params: Txn.Accounts offset (or, since v4, an _available_ account address), state key, value.",
+ "app_local_del": "params: Txn.Accounts offset (or, since v4, an _available_ account address), state key.\n\nDeleting a key which is already absent has no effect on the application local state. (In particular, it does _not_ cause the program to fail.)",
"app_global_del": "params: state key.\n\nDeleting a key which is already absent has no effect on the application global state. (In particular, it does _not_ cause the program to fail.)",
- "asset_holding_get": "params: Txn.Accounts offset (or, since v4, an account address that appears in Txn.Accounts or is Txn.Sender), asset id (or, since v4, a Txn.ForeignAssets offset). Return: did_exist flag (1 if the asset existed and 0 otherwise), value.",
- "asset_params_get": "params: Before v4, Txn.ForeignAssets offset. Since v4, Txn.ForeignAssets offset or an asset id that appears in Txn.ForeignAssets. Return: did_exist flag (1 if the asset existed and 0 otherwise), value.",
- "app_params_get": "params: Txn.ForeignApps offset or an app id that appears in Txn.ForeignApps. Return: did_exist flag (1 if the application existed and 0 otherwise), value.",
+ "asset_holding_get": "params: Txn.Accounts offset (or, since v4, an _available_ address), asset id (or, since v4, a Txn.ForeignAssets offset). Return: did_exist flag (1 if the asset existed and 0 otherwise), value.",
+ "asset_params_get": "params: Txn.ForeignAssets offset (or, since v4, an _available_ asset id. Return: did_exist flag (1 if the asset existed and 0 otherwise), value.",
+ "app_params_get": "params: Txn.ForeignApps offset or an _available_ app id. Return: did_exist flag (1 if the application existed and 0 otherwise), value.",
"log": "`log` fails if called more than MaxLogCalls times in a program, or if the sum of logged bytes exceeds 1024 bytes.",
- "itxn_begin": "`itxn_begin` initializes Sender to the application address; Fee to the minimum allowable, taking into account MinTxnFee and credit from overpaying in earlier transactions; FirstValid/LastValid to the values in the top-level transaction, and all other fields to zero values.",
- "itxn_field": "`itxn_field` fails if X is of the wrong type for F, including a byte array of the wrong size for use as an address when F is an address field. `itxn_field` also fails if X is an account or asset that does not appear in `txn.Accounts` or `txn.ForeignAssets` of the top-level transaction. (Setting addresses in asset creation are exempted from this requirement.)",
+ "itxn_begin": "`itxn_begin` initializes Sender to the application address; Fee to the minimum allowable, taking into account MinTxnFee and credit from overpaying in earlier transactions; FirstValid/LastValid to the values in the invoking transaction, and all other fields to zero or empty values.",
+ "itxn_next": "`itxn_next` initializes the transaction exactly as `itxn_begin` does",
+ "itxn_field": "`itxn_field` fails if A is of the wrong type for F, including a byte array of the wrong size for use as an address when F is an address field. `itxn_field` also fails if A is an account, asset, or app that is not _available_. (Addresses set into asset params of acfg transactions need not be _available_.)",
"itxn_submit": "`itxn_submit` resets the current transaction so that it can not be resubmitted. A new `itxn_begin` is required to prepare another inner transaction.",
- "base64_decode": "Decodes X using the base64 encoding E. Specify the encoding with an immediate arg either as URL and Filename Safe (`URLEncoding`) or Standard (`StdEncoding`). See <a href=\"https://rfc-editor.org/rfc/rfc4648.html#section-4\">RFC 4648</a> (sections 4 and 5). It is assumed that the encoding ends with the exact number of `=` padding characters as required by the RFC. When padding occurs, any unused pad bits in the encoding must be set to zero or the decoding will fail. The special cases of `\\n` and `\\r` are allowed but completely ignored. An error will result when attempting to decode a string with a character that is not in the encoding alphabet or not one of `=`, `\\r`, or `\\n`.",
+ "base64_decode": "Decodes A using the base64 encoding E. Specify the encoding with an immediate arg either as URL and Filename Safe (`URLEncoding`) or Standard (`StdEncoding`). See <a href=\"https://rfc-editor.org/rfc/rfc4648.html#section-4\">RFC 4648</a> (sections 4 and 5). It is assumed that the encoding ends with the exact number of `=` padding characters as required by the RFC. When padding occurs, any unused pad bits in the encoding must be set to zero or the decoding will fail. The special cases of `\\n` and `\\r` are allowed but completely ignored. An error will result when attempting to decode a string with a character that is not in the encoding alphabet or not one of `=`, `\\r`, or `\\n`.",
}
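The bit ordering that the `setbit` note above describes (index 0 is the leftmost bit of the leftmost byte for byte-arrays) can be reproduced in a few lines; a standalone sketch, independent of the patch:

package main

import "fmt"

// setBit sets bit i of a byte-array to 1 using the convention documented for
// setbit: index 0 is the most significant bit of the first byte.
func setBit(b []byte, i uint) {
	b[i/8] |= 0x80 >> (i % 8)
}

func main() {
	a := make([]byte, 4)
	for i := uint(0); i < 12; i++ { // set bits 0 through 11
		setBit(a, i)
	}
	fmt.Printf("%x\n", a) // fff00000, matching the example in the setbit note
}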
// OpDocExtra returns extra documentation text about an op
@@ -300,12 +324,12 @@ func OpDocExtra(opName string) string {
var OpGroups = map[string][]string{
"Arithmetic": {"sha256", "keccak256", "sha512_256", "ed25519verify", "ecdsa_verify", "ecdsa_pk_recover", "ecdsa_pk_decompress", "+", "-", "/", "*", "<", ">", "<=", ">=", "&&", "||", "shl", "shr", "sqrt", "bitlen", "exp", "==", "!=", "!", "len", "itob", "btoi", "%", "|", "&", "^", "~", "mulw", "addw", "divmodw", "expw", "getbit", "setbit", "getbyte", "setbyte", "concat"},
"Byte Array Manipulation": {"substring", "substring3", "extract", "extract3", "extract_uint16", "extract_uint32", "extract_uint64", "base64_decode"},
- "Byte Array Arithmetic": {"b+", "b-", "b/", "b*", "b<", "b>", "b<=", "b>=", "b==", "b!=", "b%"},
+ "Byte Array Arithmetic": {"b+", "b-", "b/", "b*", "b<", "b>", "b<=", "b>=", "b==", "b!=", "b%", "bsqrt"},
"Byte Array Logic": {"b|", "b&", "b^", "b~"},
- "Loading Values": {"intcblock", "intc", "intc_0", "intc_1", "intc_2", "intc_3", "pushint", "bytecblock", "bytec", "bytec_0", "bytec_1", "bytec_2", "bytec_3", "pushbytes", "bzero", "arg", "arg_0", "arg_1", "arg_2", "arg_3", "args", "txn", "gtxn", "txna", "txnas", "gtxna", "gtxnas", "gtxns", "gtxnsa", "gtxnsas", "global", "load", "loads", "store", "stores", "gload", "gloads", "gaid", "gaids"},
+ "Loading Values": {"intcblock", "intc", "intc_0", "intc_1", "intc_2", "intc_3", "pushint", "bytecblock", "bytec", "bytec_0", "bytec_1", "bytec_2", "bytec_3", "pushbytes", "bzero", "arg", "arg_0", "arg_1", "arg_2", "arg_3", "args", "txn", "gtxn", "txna", "txnas", "gtxna", "gtxnas", "gtxns", "gtxnsa", "gtxnsas", "global", "load", "loads", "store", "stores", "gload", "gloads", "gloadss", "gaid", "gaids"},
"Flow Control": {"err", "bnz", "bz", "b", "return", "pop", "dup", "dup2", "dig", "cover", "uncover", "swap", "select", "assert", "callsub", "retsub"},
- "State Access": {"balance", "min_balance", "app_opted_in", "app_local_get", "app_local_get_ex", "app_global_get", "app_global_get_ex", "app_local_put", "app_global_put", "app_local_del", "app_global_del", "asset_holding_get", "asset_params_get", "app_params_get", "log"},
- "Inner Transactions": {"itxn_begin", "itxn_next", "itxn_field", "itxn_submit", "itxn", "itxna"},
+ "State Access": {"balance", "min_balance", "app_opted_in", "app_local_get", "app_local_get_ex", "app_global_get", "app_global_get_ex", "app_local_put", "app_global_put", "app_local_del", "app_global_del", "asset_holding_get", "asset_params_get", "app_params_get", "acct_params_get", "log"},
+ "Inner Transactions": {"itxn_begin", "itxn_next", "itxn_field", "itxn_submit", "itxn", "itxna", "gitxn", "gitxna"},
}
// OpCost indicates the cost of an operation over the range of
@@ -376,7 +400,7 @@ var txnFieldDocs = map[string]string{
"Type": "Transaction type as bytes",
"TypeEnum": "See table below",
"Sender": "32 byte address",
- "Fee": "micro-Algos",
+ "Fee": "microalgos",
"FirstValid": "round number",
"FirstValidTime": "Causes program to fail; reserved for future use",
"LastValid": "round number",
@@ -388,7 +412,7 @@ var txnFieldDocs = map[string]string{
"TxID": "The computed ID for this transaction. 32 bytes.",
"Receiver": "32 byte address",
- "Amount": "micro-Algos",
+ "Amount": "microalgos",
"CloseRemainderTo": "32 byte address",
"VotePK": "32 byte address",
@@ -439,62 +463,46 @@ var txnFieldDocs = map[string]string{
"FreezeAssetAccount": "32 byte address of the account whose asset slot is being frozen or un-frozen",
"FreezeAssetFrozen": "The new frozen value, 0 or 1",
- "Logs": "Log messages emitted by an application call (itxn only)",
- "NumLogs": "Number of Logs (itxn only)",
- "CreatedAssetID": "Asset ID allocated by the creation of an ASA (itxn only)",
- "CreatedApplicationID": "ApplicationID allocated by the creation of an application (itxn only)",
-}
-
-// TxnFieldDocs are notes on fields available by `txn` and `gtxn` with extra versioning info if any
-func TxnFieldDocs() map[string]string {
- return fieldsDocWithExtra(txnFieldDocs, txnFieldSpecByName)
+ "Logs": "Log messages emitted by an application call (`itxn` only until v6)",
+ "NumLogs": "Number of Logs (`itxn` only until v6)",
+ "CreatedAssetID": "Asset ID allocated by the creation of an ASA (`itxn` only until v6)",
+ "CreatedApplicationID": "ApplicationID allocated by the creation of an application (`itxn` only until v6)",
}
var globalFieldDocs = map[string]string{
- "MinTxnFee": "micro Algos",
- "MinBalance": "micro Algos",
+ "MinTxnFee": "microalgos",
+ "MinBalance": "microalgos",
"MaxTxnLife": "rounds",
"ZeroAddress": "32 byte address of all zero bytes",
"GroupSize": "Number of transactions in this atomic transaction group. At least 1",
- "LogicSigVersion": "Maximum supported TEAL version",
+ "LogicSigVersion": "Maximum supported version",
"Round": "Current round number",
"LatestTimestamp": "Last confirmed block UNIX timestamp. Fails if negative",
- "CurrentApplicationID": "ID of current application executing. Fails in LogicSigs",
- "CreatorAddress": "Address of the creator of the current application. Fails if no such application is executing",
- "CurrentApplicationAddress": "Address that the current application controls. Fails in LogicSigs",
+ "CurrentApplicationID": "ID of current application executing",
+ "CreatorAddress": "Address of the creator of the current application",
+ "CurrentApplicationAddress": "Address that the current application controls",
"GroupID": "ID of the transaction group. 32 zero bytes if the transaction is not part of a group.",
+ "OpcodeBudget": "The remaining cost that can be spent by opcodes in this program.",
+ "CallerApplicationID": "The application ID of the application that called this application. 0 if this application is at the top-level.",
+ "CallerApplicationAddress": "The application address of the application that called this application. ZeroAddress if this application is at the top-level.",
}
-// GlobalFieldDocs are notes on fields available in `global` with extra versioning info if any
-func GlobalFieldDocs() map[string]string {
- return fieldsDocWithExtra(globalFieldDocs, globalFieldSpecByName)
-}
-
-type extractor interface {
- getExtraFor(string) string
-}
-
-func fieldsDocWithExtra(source map[string]string, ex extractor) map[string]string {
- result := make(map[string]string, len(source))
- for name, doc := range source {
- if extra := ex.getExtraFor(name); len(extra) > 0 {
- if len(doc) == 0 {
- doc = extra
- } else {
- sep := ". "
- if doc[len(doc)-1] == '.' {
- sep = " "
- }
- doc = fmt.Sprintf("%s%s%s", doc, sep, extra)
- }
- }
- result[name] = doc
+func addExtra(original string, extra string) string {
+ if len(original) == 0 {
+ return extra
}
- return result
+ if len(extra) == 0 {
+ return original
+ }
+ sep := ". "
+ if original[len(original)-1] == '.' {
+ sep = " "
+ }
+ return original + sep + extra
}
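addExtra replaces the removed fieldsDocWithExtra plumbing with a plain string join. A quick check of the separator handling could look like the following hypothetical in-package test (with the usual testing/require imports; the input strings are made up):

func TestAddExtraSketch(t *testing.T) {
	// No trailing period on the original doc: ". " is inserted.
	require.Equal(t, "microalgos. LogicSigVersion >= 2.",
		addExtra("microalgos", "LogicSigVersion >= 2."))
	// Original already ends in '.': only a space is added.
	require.Equal(t, "Round number. LogicSigVersion >= 2.",
		addExtra("Round number.", "LogicSigVersion >= 2."))
	// Empty original: the extra text is returned as-is.
	require.Equal(t, "LogicSigVersion >= 2.",
		addExtra("", "LogicSigVersion >= 2."))
}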
// AssetHoldingFieldDocs are notes on fields available in `asset_holding_get`
-var AssetHoldingFieldDocs = map[string]string{
+var assetHoldingFieldDocs = map[string]string{
"AssetBalance": "Amount of the asset unit held by this account",
"AssetFrozen": "Is the asset frozen or not",
}
@@ -515,11 +523,6 @@ var assetParamsFieldDocs = map[string]string{
"AssetCreator": "Creator address",
}
-// AssetParamsFieldDocs are notes on fields available in `asset_params_get` with extra versioning info if any
-func AssetParamsFieldDocs() map[string]string {
- return fieldsDocWithExtra(assetParamsFieldDocs, assetParamsFieldSpecByName)
-}
-
// appParamsFieldDocs are notes on fields available in `app_params_get`
var appParamsFieldDocs = map[string]string{
"AppApprovalProgram": "Bytecode of Approval Program",
@@ -533,9 +536,11 @@ var appParamsFieldDocs = map[string]string{
"AppAddress": "Address for which this application has authority",
}
-// AppParamsFieldDocs are notes on fields available in `app_params_get` with extra versioning info if any
-func AppParamsFieldDocs() map[string]string {
- return fieldsDocWithExtra(appParamsFieldDocs, appParamsFieldSpecByName)
+// acctParamsFieldDocs are notes on fields available in `acct_params_get`
+var acctParamsFieldDocs = map[string]string{
+ "AcctBalance": "Account balance in microalgos",
+ "AcctMinBalance": "Minimum required blance for account, in microalgos",
+ "AcctAuthAddr": "Address the account is rekeyed to.",
}
// EcdsaCurveDocs are notes on curves available in `ecdsa_` opcodes
diff --git a/data/transactions/logic/doc_test.go b/data/transactions/logic/doc_test.go
index 1bbfd3cc2..3287755f6 100644
--- a/data/transactions/logic/doc_test.go
+++ b/data/transactions/logic/doc_test.go
@@ -21,6 +21,7 @@ import (
"testing"
"github.com/algorand/go-algorand/test/partitiontest"
+ "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -32,24 +33,20 @@ func TestOpDocs(t *testing.T) {
opsSeen[op.Name] = false
}
for name := range opDocByName {
- _, exists := opsSeen[name]
- if !exists {
- t.Errorf("error: doc for op %#v that does not exist in OpSpecs", name)
- }
+ assert.Contains(t, opsSeen, name, "opDocByName contains strange opcode %#v", name)
opsSeen[name] = true
}
for op, seen := range opsSeen {
- if !seen {
- t.Errorf("error: doc for op %#v missing from opDocByName", op)
- }
+ assert.True(t, seen, "opDocByName is missing doc for %#v", op)
}
require.Len(t, txnFieldDocs, len(TxnFieldNames))
require.Len(t, onCompletionDescriptions, len(OnCompletionNames))
require.Len(t, globalFieldDocs, len(GlobalFieldNames))
- require.Len(t, AssetHoldingFieldDocs, len(AssetHoldingFieldNames))
+ require.Len(t, assetHoldingFieldDocs, len(AssetHoldingFieldNames))
require.Len(t, assetParamsFieldDocs, len(AssetParamsFieldNames))
require.Len(t, appParamsFieldDocs, len(AppParamsFieldNames))
+ require.Len(t, acctParamsFieldDocs, len(AcctParamsFieldNames))
require.Len(t, TypeNameDescriptions, len(TxnTypeNames))
require.Len(t, EcdsaCurveDocs, len(EcdsaCurveNames))
}
@@ -119,10 +116,10 @@ func TestAllImmediatesDocumented(t *testing.T) {
note := OpImmediateNote(op.Name)
if count == 1 && op.Details.Immediates[0].kind >= immBytes {
// More elaborate than can be checked by easy count.
- require.NotEmpty(t, note)
+ assert.NotEmpty(t, note)
continue
}
- require.Equal(t, count, strings.Count(note, "{"), "%s immediates doc is wrong", op.Name)
+ assert.Equal(t, count, strings.Count(note, "{"), "opcodeImmediateNotes for %s is wrong", op.Name)
}
}
@@ -158,20 +155,3 @@ func TestOnCompletionDescription(t *testing.T) {
desc = OnCompletionDescription(100)
require.Equal(t, "invalid constant value", desc)
}
-
-func TestFieldDocs(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- txnFields := TxnFieldDocs()
- require.Greater(t, len(txnFields), 0)
-
- globalFields := GlobalFieldDocs()
- require.Greater(t, len(globalFields), 0)
-
- doc := globalFields["MinTxnFee"]
- require.NotContains(t, doc, "LogicSigVersion >= 2")
-
- doc = globalFields["Round"]
- require.Contains(t, doc, "LogicSigVersion >= 2")
-
-}
diff --git a/data/transactions/logic/eval.go b/data/transactions/logic/eval.go
index 0157e724d..e3d39535d 100644
--- a/data/transactions/logic/eval.go
+++ b/data/transactions/logic/eval.go
@@ -25,7 +25,6 @@ import (
"encoding/hex"
"errors"
"fmt"
- "io"
"math"
"math/big"
"math/bits"
@@ -49,7 +48,7 @@ const EvalMaxVersion = LogicVersion
// The constants below control TEAL opcodes evaluation and MAY NOT be changed
// without moving them into consensus parameters.
-// MaxStringSize is the limit of byte strings created by `concat`
+// MaxStringSize is the limit of byte string length in an AVM value
const MaxStringSize = 4096
// MaxByteMathSize is the limit of byte strings supplied as input to byte math opcodes
@@ -116,6 +115,16 @@ func (sv *stackValue) uint() (uint64, error) {
return sv.Uint, nil
}
+func (sv *stackValue) uintMaxed(max uint64) (uint64, error) {
+ if sv.Bytes != nil {
+ return 0, fmt.Errorf("%#v is not a uint64", sv.Bytes)
+ }
+ if sv.Uint > max {
+ return 0, fmt.Errorf("%d is larger than max=%d", sv.Uint, max)
+ }
+ return sv.Uint, nil
+}
+
func (sv *stackValue) bool() (bool, error) {
u64, err := sv.uint()
if err != nil {
@@ -158,8 +167,10 @@ func stackValueFromTealValue(tv *basics.TealValue) (sv stackValue, err error) {
// newly-introduced transaction fields from breaking assumptions made by older
// versions of TEAL. If one of the transactions in a group will execute a TEAL
// program whose version predates a given field, that field must not be set
-// anywhere in the transaction group, or the group will be rejected.
-func ComputeMinTealVersion(group []transactions.SignedTxn) uint64 {
+// anywhere in the transaction group, or the group will be rejected. In
+// addition, inner app calls must not call teal from before inner app calls were
+// introduced.
+func ComputeMinTealVersion(group []transactions.SignedTxnWithAD, inner bool) uint64 {
var minVersion uint64
for _, txn := range group {
if !txn.Txn.RekeyTo.IsZero() {
@@ -172,6 +183,11 @@ func ComputeMinTealVersion(group []transactions.SignedTxn) uint64 {
minVersion = appsEnabledVersion
}
}
+ if inner {
+ if minVersion < innerAppsEnabledVersion {
+ minVersion = innerAppsEnabledVersion
+ }
+ }
}
return minVersion
}
@@ -194,65 +210,38 @@ type LedgerForLogic interface {
AssetHolding(addr basics.Address, assetIdx basics.AssetIndex) (basics.AssetHolding, error)
AssetParams(aidx basics.AssetIndex) (basics.AssetParams, basics.Address, error)
AppParams(aidx basics.AppIndex) (basics.AppParams, basics.Address, error)
- ApplicationID() basics.AppIndex
OptedIn(addr basics.Address, appIdx basics.AppIndex) (bool, error)
- GetCreatableID(groupIdx int) basics.CreatableIndex
GetLocal(addr basics.Address, appIdx basics.AppIndex, key string, accountIdx uint64) (value basics.TealValue, exists bool, err error)
- SetLocal(addr basics.Address, key string, value basics.TealValue, accountIdx uint64) error
- DelLocal(addr basics.Address, key string, accountIdx uint64) error
+ SetLocal(addr basics.Address, appIdx basics.AppIndex, key string, value basics.TealValue, accountIdx uint64) error
+ DelLocal(addr basics.Address, appIdx basics.AppIndex, key string, accountIdx uint64) error
GetGlobal(appIdx basics.AppIndex, key string) (value basics.TealValue, exists bool, err error)
- SetGlobal(key string, value basics.TealValue) error
- DelGlobal(key string) error
-
- GetDelta(txn *transactions.Transaction) (evalDelta transactions.EvalDelta, err error)
-
- Perform(txn *transactions.Transaction, spec transactions.SpecialAddresses) (transactions.ApplyData, error)
-}
-
-// EvalSideEffects contains data returned from evaluation
-type EvalSideEffects struct {
- scratchSpace scratchSpace
-}
-
-// MakePastSideEffects allocates and initializes a slice of EvalSideEffects of length `size`
-func MakePastSideEffects(size int) (pastSideEffects []EvalSideEffects) {
- pastSideEffects = make([]EvalSideEffects, size)
- for j := range pastSideEffects {
- pastSideEffects[j] = EvalSideEffects{}
- }
- return
-}
+ SetGlobal(appIdx basics.AppIndex, key string, value basics.TealValue) error
+ DelGlobal(appIdx basics.AppIndex, key string) error
-// getScratchValue loads and clones a stackValue
-// The value is cloned so the original bytes are protected from changes
-func (se *EvalSideEffects) getScratchValue(scratchPos uint8) stackValue {
- return se.scratchSpace[scratchPos].clone()
+ Perform(gi int, ep *EvalParams) error
+ Counter() uint64
}
-// setScratchSpace stores the scratch space
-func (se *EvalSideEffects) setScratchSpace(scratch scratchSpace) {
- se.scratchSpace = scratch
+// resources contains a list of apps and assets. It's used to track the apps and
+// assets created by a txgroup, for "free" access.
+type resources struct {
+ asas []basics.AssetIndex
+ apps []basics.AppIndex
}
// EvalParams contains data that comes into condition evaluation.
type EvalParams struct {
- // the transaction being evaluated
- Txn *transactions.SignedTxn
-
Proto *config.ConsensusParams
- Trace io.Writer
-
- TxnGroup []transactions.SignedTxn
+ Trace *strings.Builder
- // GroupIndex should point to Txn within TxnGroup
- GroupIndex uint64
+ TxnGroup []transactions.SignedTxnWithAD
- PastSideEffects []EvalSideEffects
+ pastScratch []*scratchSpace
- Logger logging.Logger
+ logger logging.Logger
Ledger LedgerForLogic
@@ -264,19 +253,121 @@ type EvalParams struct {
// MinTealVersion is nil, we will compute it ourselves
MinTealVersion *uint64
- // Amount "overpaid" by the top-level transactions of the
- // group. Often 0. When positive, it is spent by application
- // actions. Shared value across a group's txns, so that it
- // can be updated. nil is interpretted as 0.
+ // Amount "overpaid" by the transactions of the group. Often 0. When
+ // positive, it can be spent by inner transactions. Shared across a group's
+ // txns, so that it can be updated (including upward, by overpaying inner
+ // transactions). nil is treated as 0 (used before fee pooling is enabled).
FeeCredit *uint64
Specials *transactions.SpecialAddresses
- // determines eval mode: runModeSignature or runModeApplication
- runModeFlags runMode
-
- // Total pool of app call budget in a group transaction
+ // Total pool of app call budget in a group transaction (nil before budget pooling enabled)
PooledApplicationBudget *uint64
+
+ // Total allowable inner txns in a group transaction (nil before inner pooling enabled)
+ pooledAllowedInners *int
+
+ // created tracks resources created by this group's transactions. Such
+ // resources may be used even though they are not in a foreign array. The
+ // lists remain empty until createdResourcesVersion.
+ created *resources
+
+ // Caching these here means the hashes can be shared across the TxnGroup
+ // (and inners, because the cache is shared with the inner EvalParams)
+ appAddrCache map[basics.AppIndex]basics.Address
+
+ // Cache the txid hashing, but do *not* share this into inner EvalParams, as
+ // the key is just the index in the txgroup.
+ txidCache map[int]transactions.Txid
+
+ // The calling context, if this is an inner app call
+ caller *EvalContext
+}
+
+func copyWithClearAD(txgroup []transactions.SignedTxnWithAD) []transactions.SignedTxnWithAD {
+ copy := make([]transactions.SignedTxnWithAD, len(txgroup))
+ for i := range txgroup {
+ copy[i].SignedTxn = txgroup[i].SignedTxn
+ // leave copy[i].ApplyData clear
+ }
+ return copy
+}
+
+// NewEvalParams creates an EvalParams to use while evaluating a top-level txgroup
+func NewEvalParams(txgroup []transactions.SignedTxnWithAD, proto *config.ConsensusParams, specials *transactions.SpecialAddresses) *EvalParams {
+ apps := 0
+ for _, tx := range txgroup {
+ if tx.Txn.Type == protocol.ApplicationCallTx {
+ apps++
+ }
+ }
+
+ minTealVersion := ComputeMinTealVersion(txgroup, false)
+
+ var pooledApplicationBudget *uint64
+ var pooledAllowedInners *int
+
+ credit, _ := transactions.FeeCredit(txgroup, proto.MinTxnFee)
+
+ if proto.EnableAppCostPooling {
+ pooledApplicationBudget = new(uint64)
+ *pooledApplicationBudget = uint64(apps * proto.MaxAppProgramCost)
+ }
+
+ if proto.EnableInnerTransactionPooling {
+ pooledAllowedInners = new(int)
+ *pooledAllowedInners = proto.MaxTxGroupSize * proto.MaxInnerTransactions
+ }
+
+ return &EvalParams{
+ TxnGroup: copyWithClearAD(txgroup),
+ Proto: proto,
+ Specials: specials,
+ pastScratch: make([]*scratchSpace, len(txgroup)),
+ MinTealVersion: &minTealVersion,
+ FeeCredit: &credit,
+ PooledApplicationBudget: pooledApplicationBudget,
+ pooledAllowedInners: pooledAllowedInners,
+ created: &resources{},
+ appAddrCache: make(map[basics.AppIndex]basics.Address),
+ }
+}
+
+// NewInnerEvalParams creates an EvalParams to be used while evaluating an inner group txgroup
+func NewInnerEvalParams(txg []transactions.SignedTxn, caller *EvalContext) *EvalParams {
+ txgroup := transactions.WrapSignedTxnsWithAD(txg)
+
+ minTealVersion := ComputeMinTealVersion(txgroup, true)
+ // Can't happen now, since innerAppsEnabledVersion is greater than any
+ // minimum imposed otherwise. But it is correct to check.
+ if minTealVersion < *caller.MinTealVersion {
+ minTealVersion = *caller.MinTealVersion
+ }
+ credit, _ := transactions.FeeCredit(txgroup, caller.Proto.MinTxnFee)
+ *caller.FeeCredit = basics.AddSaturate(*caller.FeeCredit, credit)
+
+ if caller.Proto.EnableAppCostPooling {
+ for _, tx := range txgroup {
+ if tx.Txn.Type == protocol.ApplicationCallTx {
+ *caller.PooledApplicationBudget += uint64(caller.Proto.MaxAppProgramCost)
+ }
+ }
+ }
+
+ ep := &EvalParams{
+ Proto: caller.Proto,
+ TxnGroup: copyWithClearAD(txgroup),
+ pastScratch: make([]*scratchSpace, len(txgroup)),
+ MinTealVersion: &minTealVersion,
+ FeeCredit: caller.FeeCredit,
+ Specials: caller.Specials,
+ PooledApplicationBudget: caller.PooledApplicationBudget,
+ pooledAllowedInners: caller.pooledAllowedInners,
+ Ledger: caller.Ledger,
+ created: caller.created,
+ appAddrCache: caller.appAddrCache,
+ caller: caller,
+ }
+ return ep
}
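A minimal sketch of the intended call pattern for the reworked API: one EvalParams is built per top-level group and each app call is evaluated against it by group index, so pooled budget, fee credit, and created resources carry across the group. The helper below is hypothetical (not part of the patch), and the ledger, programs, and app IDs are assumed to be supplied by the caller:

// evalAppCalls is an illustrative helper, not part of this change.
func evalAppCalls(txgroup []transactions.SignedTxnWithAD, proto *config.ConsensusParams,
	specials *transactions.SpecialAddresses, ledger logic.LedgerForLogic,
	programs map[int][]byte, appIDs map[int]basics.AppIndex) error {

	ep := logic.NewEvalParams(txgroup, proto, specials) // one EvalParams for the whole group
	ep.Ledger = ledger

	for gi, tx := range txgroup {
		if tx.Txn.Type != protocol.ApplicationCallTx {
			continue
		}
		pass, err := logic.EvalApp(programs[gi], gi, appIDs[gi], ep)
		if err != nil || !pass {
			return fmt.Errorf("app call %d rejected: %v", gi, err)
		}
		// ep carries the updated pooled budget, fee credit, and created
		// resources forward to the next transaction in the group.
	}
	return nil
}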
type opEvalFunc func(cx *EvalContext)
@@ -312,35 +403,55 @@ func (r runMode) String() string {
return "Unknown"
}
-func (ep EvalParams) budget() int {
- if ep.runModeFlags == runModeSignature {
- return int(ep.Proto.LogicSigMaxCost)
- }
- if ep.Proto.EnableAppCostPooling && ep.PooledApplicationBudget != nil {
- return int(*ep.PooledApplicationBudget)
+func (ep EvalParams) log() logging.Logger {
+ if ep.logger != nil {
+ return ep.logger
}
- return ep.Proto.MaxAppProgramCost
+ return logging.Base()
}
-func (ep EvalParams) log() logging.Logger {
- if ep.Logger != nil {
- return ep.Logger
+// RecordAD notes ApplyData information that was derived outside of the logic
+// package. For example, after a acfg transaction is processed, the AD created
+// by the acfg is added to the EvalParams this way.
+func (ep *EvalParams) RecordAD(gi int, ad transactions.ApplyData) {
+ ep.TxnGroup[gi].ApplyData = ad
+ if aid := ad.ConfigAsset; aid != 0 {
+ ep.created.asas = append(ep.created.asas, aid)
+ }
+ if aid := ad.ApplicationID; aid != 0 {
+ ep.created.apps = append(ep.created.apps, aid)
}
- return logging.Base()
}
-type scratchSpace = [256]stackValue
+type scratchSpace [256]stackValue
-// EvalContext is the execution context of AVM bytecode. It contains
-// the full state of the running program, and tracks some of the
-// things that the program has been done, like log message and inner
-// transactions.
+// EvalContext is the execution context of AVM bytecode. It contains the full
+// state of the running program, and tracks some of the things that the program
+// has done, like log messages and inner transactions.
type EvalContext struct {
- EvalParams
+ *EvalParams
+
+ // determines eval mode: runModeSignature or runModeApplication
+ runModeFlags runMode
+
+ // the index of the transaction being evaluated
+ GroupIndex int
+ // the transaction being evaluated (initialized from GroupIndex + ep.TxnGroup)
+ Txn *transactions.SignedTxnWithAD
+
+ // Txn.EvalDelta maintains a summary of changes as we go. We used to
+ // compute this from the ledger after a full eval. But now apps can call
+ // apps. When they do, all of the changes accumulate into the parent's
+ // ledger, but Txn.EvalDelta should only have the changes from *this*
+ // call. (The changes caused by children are deeper inside - in the
+ // EvalDeltas of the InnerTxns inside this EvalDelta) Nice bonus - by
+ // keeping the running changes, the debugger can be changed to display them
+ // as the app runs.
stack []stackValue
callstack []int
+ appID basics.AppIndex
program []byte
pc int
nextpc int
@@ -351,12 +462,8 @@ type EvalContext struct {
scratch scratchSpace
subtxns []transactions.SignedTxn // place to build for itxn_submit
- // Previous transactions Performed() and their effects
- InnerTxns []transactions.SignedTxnWithAD
-
- cost int // cost incurred so far
- Logs []string
- logSize int // total log size so far
+ cost int // cost incurred so far
+ logSize int // total log size so far
// Set of PC values that branches we've seen so far might
// go. So, if checkStep() skips one, that branch is trying to
@@ -369,8 +476,6 @@ type EvalContext struct {
instructionStarts map[int]bool
programHashCached crypto.Digest
- txidCache map[uint64]transactions.Txid
- appAddrCache map[basics.AppIndex]basics.Address
// Stores state & disassembly for the optional debugger
debugState DebugState
@@ -408,6 +513,15 @@ func (st StackType) String() string {
return "internal error, unknown type"
}
+// Typed tells whether the StackType is a specific concrete type.
+func (st StackType) Typed() bool {
+ switch st {
+ case StackUint64, StackBytes:
+ return true
+ }
+ return false
+}
+
func (sts StackTypes) plus(other StackTypes) StackTypes {
return append(sts, other...)
}
@@ -425,41 +539,51 @@ func (pe PanicError) Error() string {
var errLogicSigNotSupported = errors.New("LogicSig not supported")
var errTooManyArgs = errors.New("LogicSig has too many arguments")
-// EvalStatefulCx executes stateful TEAL program
-func EvalStatefulCx(program []byte, params EvalParams) (bool, *EvalContext, error) {
- var cx EvalContext
- cx.EvalParams = params
- cx.runModeFlags = runModeApplication
+// EvalContract executes stateful TEAL program as the gi'th transaction in params
+func EvalContract(program []byte, gi int, aid basics.AppIndex, params *EvalParams) (bool, *EvalContext, error) {
+ if params.Ledger == nil {
+ return false, nil, errors.New("no ledger in contract eval")
+ }
+ cx := EvalContext{
+ EvalParams: params,
+ runModeFlags: runModeApplication,
+ GroupIndex: gi,
+ Txn: &params.TxnGroup[gi],
+ appID: aid,
+ }
pass, err := eval(program, &cx)
- // The following two updates show a need for something like a
- // GroupEvalContext, as we are currently tucking things into the
- // EvalParams so that they are available to later calls.
-
- // update pooled budget
- if cx.Proto.EnableAppCostPooling && cx.PooledApplicationBudget != nil {
- // if eval passes, then budget is always greater than cost, so should not have underflow
+ // update pooled budget (shouldn't overflow, but being careful anyway)
+ if cx.PooledApplicationBudget != nil {
*cx.PooledApplicationBudget = basics.SubSaturate(*cx.PooledApplicationBudget, uint64(cx.cost))
}
+ // update allowed inner transactions (shouldn't overflow, but it's an int, so safe anyway)
+ if cx.pooledAllowedInners != nil {
+ *cx.pooledAllowedInners -= len(cx.Txn.EvalDelta.InnerTxns)
+ }
// update side effects
- cx.PastSideEffects[cx.GroupIndex].setScratchSpace(cx.scratch)
+ cx.pastScratch[cx.GroupIndex] = &scratchSpace{}
+ *cx.pastScratch[cx.GroupIndex] = cx.scratch
return pass, &cx, err
}
-// EvalStateful is a lighter weight interface that doesn't return the EvalContext
-func EvalStateful(program []byte, params EvalParams) (bool, error) {
- pass, _, err := EvalStatefulCx(program, params)
+// EvalApp is a lighter weight interface that doesn't return the EvalContext
+func EvalApp(program []byte, gi int, aid basics.AppIndex, params *EvalParams) (bool, error) {
+ pass, _, err := EvalContract(program, gi, aid, params)
return pass, err
}
-// Eval checks to see if a transaction passes logic
+// EvalSignature evaluates the logicsig of the ith transaction in params.
// A program passes successfully if it finishes with one int element on the stack that is non-zero.
-func Eval(program []byte, params EvalParams) (pass bool, err error) {
- var cx EvalContext
- cx.EvalParams = params
- cx.runModeFlags = runModeSignature
- return eval(program, &cx)
+func EvalSignature(gi int, params *EvalParams) (pass bool, err error) {
+ cx := EvalContext{
+ EvalParams: params,
+ runModeFlags: runModeSignature,
+ GroupIndex: gi,
+ Txn: &params.TxnGroup[gi],
+ }
+ return eval(cx.Txn.Lsig.Logic, &cx)
}
// eval implementation
@@ -471,10 +595,8 @@ func eval(program []byte, cx *EvalContext) (pass bool, err error) {
stlen := runtime.Stack(buf, false)
pass = false
errstr := string(buf[:stlen])
- if cx.EvalParams.Trace != nil {
- if sb, ok := cx.EvalParams.Trace.(*strings.Builder); ok {
- errstr += sb.String()
- }
+ if cx.Trace != nil {
+ errstr += cx.Trace.String()
}
err = PanicError{x, errstr}
cx.EvalParams.log().Errorf("recovered panic in Eval: %w", err)
@@ -495,44 +617,23 @@ func eval(program []byte, cx *EvalContext) (pass bool, err error) {
err = errLogicSigNotSupported
return
}
- if cx.EvalParams.Txn.Lsig.Args != nil && len(cx.EvalParams.Txn.Lsig.Args) > transactions.EvalMaxArgs {
+ if cx.Txn.Lsig.Args != nil && len(cx.Txn.Lsig.Args) > transactions.EvalMaxArgs {
err = errTooManyArgs
return
}
- if len(program) == 0 {
- cx.err = errors.New("invalid program (empty)")
- return false, cx.err
- }
- version, vlen := binary.Uvarint(program)
- if vlen <= 0 {
- cx.err = errors.New("invalid version")
- return false, cx.err
- }
- if version > EvalMaxVersion {
- cx.err = fmt.Errorf("program version %d greater than max supported version %d", version, EvalMaxVersion)
- return false, cx.err
- }
- if version > cx.EvalParams.Proto.LogicSigVersion {
- cx.err = fmt.Errorf("program version %d greater than protocol supported version %d", version, cx.EvalParams.Proto.LogicSigVersion)
- return false, cx.err
- }
-
- var minVersion uint64
- if cx.EvalParams.MinTealVersion == nil {
- minVersion = ComputeMinTealVersion(cx.EvalParams.TxnGroup)
- } else {
- minVersion = *cx.EvalParams.MinTealVersion
- }
- if version < minVersion {
- cx.err = fmt.Errorf("program version must be >= %d for this transaction group, but have version %d", minVersion, version)
- return false, cx.err
+ version, vlen, err := versionCheck(program, cx.EvalParams)
+ if err != nil {
+ cx.err = err
+ return false, err
}
cx.version = version
cx.pc = vlen
cx.stack = make([]stackValue, 0, 10)
cx.program = program
+ cx.Txn.EvalDelta.GlobalDelta = basics.StateDelta{}
+ cx.Txn.EvalDelta.LocalDeltas = make(map[uint64]basics.StateDelta)
if cx.Debugger != nil {
cx.debugState = makeDebugState(cx)
@@ -573,34 +674,29 @@ func eval(program []byte, cx *EvalContext) (pass bool, err error) {
return cx.stack[0].Uint != 0, nil
}
-// CheckStateful should be faster than EvalStateful. It can perform
+// CheckContract should be faster than EvalContract. It can perform
// static checks and reject programs that are invalid. Prior to v4,
// these static checks include a cost estimate that must be low enough
// (controlled by params.Proto).
-func CheckStateful(program []byte, params EvalParams) error {
- params.runModeFlags = runModeApplication
- return check(program, params)
+func CheckContract(program []byte, params *EvalParams) error {
+ return check(program, params, runModeApplication)
}
-// Check should be faster than Eval. It can perform static checks and
-// reject programs that are invalid. Prior to v4, these static checks
-// include a cost estimate that must be low enough (controlled by
-// params.Proto).
-func Check(program []byte, params EvalParams) error {
- params.runModeFlags = runModeSignature
- return check(program, params)
+// CheckSignature should be faster than EvalSignature. It can perform static
+// checks and reject programs that are invalid. Prior to v4, these static checks
+// include a cost estimate that must be low enough (controlled by params.Proto).
+func CheckSignature(gi int, params *EvalParams) error {
+ return check(params.TxnGroup[gi].Lsig.Logic, params, runModeSignature)
}
-func check(program []byte, params EvalParams) (err error) {
+func check(program []byte, params *EvalParams, mode runMode) (err error) {
defer func() {
if x := recover(); x != nil {
buf := make([]byte, 16*1024)
stlen := runtime.Stack(buf, false)
errstr := string(buf[:stlen])
if params.Trace != nil {
- if sb, ok := params.Trace.(*strings.Builder); ok {
- errstr += sb.String()
- }
+ errstr += params.Trace.String()
}
err = PanicError{x, errstr}
params.log().Errorf("recovered panic in Check: %s", err)
@@ -609,36 +705,22 @@ func check(program []byte, params EvalParams) (err error) {
if (params.Proto == nil) || (params.Proto.LogicSigVersion == 0) {
return errLogicSigNotSupported
}
- version, vlen := binary.Uvarint(program)
- if vlen <= 0 {
- return errors.New("invalid version")
- }
- if version > EvalMaxVersion {
- return fmt.Errorf("program version %d greater than max supported version %d", version, EvalMaxVersion)
- }
- if version > params.Proto.LogicSigVersion {
- return fmt.Errorf("program version %d greater than protocol supported version %d", version, params.Proto.LogicSigVersion)
- }
- var minVersion uint64
- if params.MinTealVersion == nil {
- minVersion = ComputeMinTealVersion(params.TxnGroup)
- } else {
- minVersion = *params.MinTealVersion
- }
- if version < minVersion {
- return fmt.Errorf("program version must be >= %d for this transaction group, but have version %d", minVersion, version)
+ version, vlen, err := versionCheck(program, params)
+ if err != nil {
+ return err
}
var cx EvalContext
cx.version = version
cx.pc = vlen
cx.EvalParams = params
+ cx.runModeFlags = mode
cx.program = program
cx.branchTargets = make(map[int]bool)
cx.instructionStarts = make(map[int]bool)
- maxCost := params.budget()
+ maxCost := cx.budget()
if version >= backBranchEnabledVersion {
maxCost = math.MaxInt32
}
@@ -664,6 +746,31 @@ func check(program []byte, params EvalParams) (err error) {
return nil
}
+func versionCheck(program []byte, params *EvalParams) (uint64, int, error) {
+ if len(program) == 0 {
+ return 0, 0, errors.New("invalid program (empty)")
+ }
+ version, vlen := binary.Uvarint(program)
+ if vlen <= 0 {
+ return 0, 0, errors.New("invalid version")
+ }
+ if version > EvalMaxVersion {
+ return 0, 0, fmt.Errorf("program version %d greater than max supported version %d", version, EvalMaxVersion)
+ }
+ if version > params.Proto.LogicSigVersion {
+ return 0, 0, fmt.Errorf("program version %d greater than protocol supported version %d", version, params.Proto.LogicSigVersion)
+ }
+
+ if params.MinTealVersion == nil {
+ minVersion := ComputeMinTealVersion(params.TxnGroup, params.caller != nil)
+ params.MinTealVersion = &minVersion
+ }
+ if version < *params.MinTealVersion {
+ return 0, 0, fmt.Errorf("program version must be >= %d for this transaction group, but have version %d", *params.MinTealVersion, version)
+ }
+ return version, vlen, nil
+}
+
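versionCheck consolidates the version handling that was previously duplicated in eval and check. The version prefix is simply the program's leading varint, as a toy decode shows (standalone sketch; the byte values are illustrative only):

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// Toy program bytes: a version prefix of 6 followed by an arbitrary body.
	program := []byte{0x06, 0x81, 0x01}
	version, vlen := binary.Uvarint(program)
	fmt.Println(version, vlen) // 6 1 -- versionCheck then compares 6 against
	// EvalMaxVersion, Proto.LogicSigVersion, and the group's MinTealVersion,
	// and evaluation starts at pc = vlen.
}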
func opCompat(expected, got StackType) bool {
if expected == StackAny {
return true
@@ -685,9 +792,26 @@ func boolToUint(x bool) uint64 {
return 0
}
-// MaxStackDepth should move to consensus params
+// MaxStackDepth should not change unless gated by a teal version change / consensus upgrade.
const MaxStackDepth = 1000
+func (cx *EvalContext) budget() int {
+ if cx.runModeFlags == runModeSignature {
+ return int(cx.Proto.LogicSigMaxCost)
+ }
+ if cx.Proto.EnableAppCostPooling && cx.PooledApplicationBudget != nil {
+ return int(*cx.PooledApplicationBudget)
+ }
+ return cx.Proto.MaxAppProgramCost
+}
+
+func (cx *EvalContext) allowedInners() int {
+ if cx.Proto.EnableInnerTransactionPooling && cx.pooledAllowedInners != nil {
+ return *cx.pooledAllowedInners
+ }
+ return cx.Proto.MaxInnerTransactions
+}
+
func (cx *EvalContext) step() {
opcode := cx.program[cx.pc]
spec := &opsByOpcode[cx.version][opcode]
@@ -1035,11 +1159,7 @@ func opLt(cx *EvalContext) {
last := len(cx.stack) - 1
prev := last - 1
cond := cx.stack[prev].Uint < cx.stack[last].Uint
- if cond {
- cx.stack[prev].Uint = 1
- } else {
- cx.stack[prev].Uint = 0
- }
+ cx.stack[prev].Uint = boolToUint(cond)
cx.stack = cx.stack[:last]
}
@@ -1062,11 +1182,7 @@ func opAnd(cx *EvalContext) {
last := len(cx.stack) - 1
prev := last - 1
cond := (cx.stack[prev].Uint != 0) && (cx.stack[last].Uint != 0)
- if cond {
- cx.stack[prev].Uint = 1
- } else {
- cx.stack[prev].Uint = 0
- }
+ cx.stack[prev].Uint = boolToUint(cond)
cx.stack = cx.stack[:last]
}
@@ -1074,11 +1190,7 @@ func opOr(cx *EvalContext) {
last := len(cx.stack) - 1
prev := last - 1
cond := (cx.stack[prev].Uint != 0) || (cx.stack[last].Uint != 0)
- if cond {
- cx.stack[prev].Uint = 1
- } else {
- cx.stack[prev].Uint = 0
- }
+ cx.stack[prev].Uint = boolToUint(cond)
cx.stack = cx.stack[:last]
}
@@ -1097,11 +1209,7 @@ func opEq(cx *EvalContext) {
} else {
cond = cx.stack[prev].Uint == cx.stack[last].Uint
}
- if cond {
- cx.stack[prev].Uint = 1
- } else {
- cx.stack[prev].Uint = 0
- }
+ cx.stack[prev].Uint = boolToUint(cond)
cx.stack[prev].Bytes = nil
cx.stack = cx.stack[:last]
}
@@ -1114,11 +1222,7 @@ func opNeq(cx *EvalContext) {
func opNot(cx *EvalContext) {
last := len(cx.stack) - 1
cond := cx.stack[last].Uint == 0
- if cond {
- cx.stack[last].Uint = 1
- } else {
- cx.stack[last].Uint = 0
- }
+ cx.stack[last].Uint = boolToUint(cond)
}
func opLen(cx *EvalContext) {
@@ -1385,6 +1489,19 @@ func opBytesMul(cx *EvalContext) {
opBytesBinOp(cx, result, result.Mul)
}
+func opBytesSqrt(cx *EvalContext) {
+ last := len(cx.stack) - 1
+
+ if len(cx.stack[last].Bytes) > MaxByteMathSize {
+ cx.err = errors.New("math attempted on large byte-array")
+ return
+ }
+
+ val := new(big.Int).SetBytes(cx.stack[last].Bytes)
+ val.Sqrt(val)
+ cx.stack[last].Bytes = val.Bytes()
+}
+
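opBytesSqrt relies on math/big's integer square root, which already implements the documented "largest I such that I^2 <= A" behavior; a standalone check:

package main

import (
	"fmt"
	"math/big"
)

func main() {
	a := new(big.Int).SetBytes([]byte{0x01, 0x00}) // 256, read big-endian
	a.Sqrt(a)
	fmt.Printf("%x\n", a.Bytes()) // 10, i.e. 16

	b := new(big.Int).SetBytes([]byte{0xff}) // 255
	b.Sqrt(b)
	fmt.Println(b) // 15, the largest I with I*I <= 255
}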
func opBytesLt(cx *EvalContext) {
last := len(cx.stack) - 1
prev := last - 1
@@ -1397,11 +1514,7 @@ func opBytesLt(cx *EvalContext) {
rhs := new(big.Int).SetBytes(cx.stack[last].Bytes)
lhs := new(big.Int).SetBytes(cx.stack[prev].Bytes)
cx.stack[prev].Bytes = nil
- if lhs.Cmp(rhs) < 0 {
- cx.stack[prev].Uint = 1
- } else {
- cx.stack[prev].Uint = 0
- }
+ cx.stack[prev].Uint = boolToUint(lhs.Cmp(rhs) < 0)
cx.stack = cx.stack[:last]
}
@@ -1432,11 +1545,7 @@ func opBytesEq(cx *EvalContext) {
rhs := new(big.Int).SetBytes(cx.stack[last].Bytes)
lhs := new(big.Int).SetBytes(cx.stack[prev].Bytes)
cx.stack[prev].Bytes = nil
- if lhs.Cmp(rhs) == 0 {
- cx.stack[prev].Uint = 1
- } else {
- cx.stack[prev].Uint = 0
- }
+ cx.stack[prev].Uint = boolToUint(lhs.Cmp(rhs) == 0)
cx.stack = cx.stack[:last]
}
@@ -1884,62 +1993,58 @@ func TxnFieldToTealValue(txn *transactions.Transaction, groupIndex int, field Tx
if groupIndex < 0 {
return basics.TealValue{}, fmt.Errorf("negative groupIndex %d", groupIndex)
}
- cx := EvalContext{EvalParams: EvalParams{GroupIndex: uint64(groupIndex)}}
+ cx := EvalContext{
+ GroupIndex: groupIndex,
+ Txn: &transactions.SignedTxnWithAD{SignedTxn: transactions.SignedTxn{Txn: *txn}},
+ }
fs := txnFieldSpecByField[field]
- sv, err := cx.txnFieldToStack(txn, fs, arrayFieldIdx, uint64(groupIndex))
+ sv, err := cx.txnFieldToStack(cx.Txn, &fs, arrayFieldIdx, groupIndex, false)
return sv.toTealValue(), err
}
-func (cx *EvalContext) getTxID(txn *transactions.Transaction, groupIndex uint64) transactions.Txid {
+func (cx *EvalContext) getTxID(txn *transactions.Transaction, groupIndex int) transactions.Txid {
+ if cx.EvalParams == nil { // Special case, called through TxnFieldToTealValue. No EvalParams, no caching.
+ return txn.ID()
+ }
+
// Initialize txidCache if necessary
- if cx.txidCache == nil {
- cx.txidCache = make(map[uint64]transactions.Txid, len(cx.TxnGroup))
+ if cx.EvalParams.txidCache == nil {
+ cx.EvalParams.txidCache = make(map[int]transactions.Txid, len(cx.TxnGroup))
}
// Hashes are expensive, so we cache computed TxIDs
- txid, ok := cx.txidCache[groupIndex]
+ txid, ok := cx.EvalParams.txidCache[groupIndex]
if !ok {
- txid = txn.ID()
- cx.txidCache[groupIndex] = txid
+ if cx.caller != nil {
+ innerOffset := len(cx.caller.Txn.EvalDelta.InnerTxns)
+ txid = txn.InnerID(cx.caller.Txn.ID(), innerOffset+groupIndex)
+ } else {
+ txid = txn.ID()
+ }
+ cx.EvalParams.txidCache[groupIndex] = txid
}
return txid
}
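The cache above exists only to avoid recomputing hashes for the same group slot. A minimal sketch of that memoize-by-index shape, with a stand-in compute function instead of txn.ID()/txn.InnerID() and illustrative types:

    // txidMemo caches an expensive per-index computation, mirroring the shape
    // of getTxID. compute stands in for the real hashing.
    type txidMemo struct {
        cache map[int]string
    }

    func (m *txidMemo) get(gi int, compute func(int) string) string {
        if m.cache == nil {
            m.cache = make(map[int]string)
        }
        id, ok := m.cache[gi]
        if !ok {
            id = compute(gi)
            m.cache[gi] = id
        }
        return id
    }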
-func (cx *EvalContext) itxnFieldToStack(itxn *transactions.SignedTxnWithAD, fs txnFieldSpec, arrayFieldIdx uint64) (sv stackValue, err error) {
+func (cx *EvalContext) txnFieldToStack(stxn *transactions.SignedTxnWithAD, fs *txnFieldSpec, arrayFieldIdx uint64, groupIndex int, inner bool) (sv stackValue, err error) {
if fs.effects {
- switch fs.field {
- case Logs:
- if arrayFieldIdx >= uint64(len(itxn.EvalDelta.Logs)) {
- err = fmt.Errorf("invalid Logs index %d", arrayFieldIdx)
- return
- }
- sv.Bytes = nilToEmpty([]byte(itxn.EvalDelta.Logs[arrayFieldIdx]))
- case NumLogs:
- sv.Uint = uint64(len(itxn.EvalDelta.Logs))
- case CreatedAssetID:
- sv.Uint = uint64(itxn.ApplyData.ConfigAsset)
- case CreatedApplicationID:
- sv.Uint = uint64(itxn.ApplyData.ApplicationID)
- default:
- err = fmt.Errorf("invalid txn field %d", fs.field)
+ if cx.runModeFlags == runModeSignature {
+ return sv, fmt.Errorf("txn[%s] not allowed in current mode", fs.field)
+ }
+ if cx.version < txnEffectsVersion && !inner {
+ return sv, errors.New("Unable to obtain effects from top-level transactions")
}
- return
- }
-
- if fs.field == GroupIndex || fs.field == TxID {
- err = fmt.Errorf("illegal field for inner transaction %s", fs.field)
- } else {
- sv, err = cx.txnFieldToStack(&itxn.Txn, fs, arrayFieldIdx, 0)
}
- return
-}
-
-func (cx *EvalContext) txnFieldToStack(txn *transactions.Transaction, fs txnFieldSpec, arrayFieldIdx uint64, groupIndex uint64) (sv stackValue, err error) {
- if fs.effects {
- return sv, errors.New("Unable to obtain effects from top-level transactions")
+ if inner {
+ // Before we had inner apps, we did not allow these, since we had no inner groups.
+ if cx.version < innerAppsEnabledVersion && (fs.field == GroupIndex || fs.field == TxID) {
+ err = fmt.Errorf("illegal field for inner transaction %s", fs.field)
+ return
+ }
}
err = nil
+ txn := &stxn.SignedTxn.Txn
switch fs.field {
case Sender:
sv.Bytes = txn.Sender[:]
@@ -1984,7 +2089,7 @@ func (cx *EvalContext) txnFieldToStack(txn *transactions.Transaction, fs txnFiel
case AssetCloseTo:
sv.Bytes = txn.AssetCloseTo[:]
case GroupIndex:
- sv.Uint = groupIndex
+ sv.Uint = uint64(groupIndex)
case TxID:
txid := cx.getTxID(txn, groupIndex)
sv.Bytes = txid[:]
@@ -2089,31 +2194,53 @@ func (cx *EvalContext) txnFieldToStack(txn *transactions.Transaction, fs txnFiel
sv.Uint = boolToUint(txn.AssetFrozen)
case ExtraProgramPages:
sv.Uint = uint64(txn.ExtraProgramPages)
+
+ case Logs:
+ if arrayFieldIdx >= uint64(len(stxn.EvalDelta.Logs)) {
+ err = fmt.Errorf("invalid Logs index %d", arrayFieldIdx)
+ return
+ }
+ sv.Bytes = nilToEmpty([]byte(stxn.EvalDelta.Logs[arrayFieldIdx]))
+ case NumLogs:
+ sv.Uint = uint64(len(stxn.EvalDelta.Logs))
+ case CreatedAssetID:
+ sv.Uint = uint64(stxn.ApplyData.ConfigAsset)
+ case CreatedApplicationID:
+ sv.Uint = uint64(stxn.ApplyData.ApplicationID)
+
default:
- err = fmt.Errorf("invalid txn field %d", fs.field)
+ err = fmt.Errorf("invalid txn field %s", fs.field)
return
}
- txnFieldType := TxnFieldTypes[fs.field]
- if !typecheck(txnFieldType, sv.argType()) {
- err = fmt.Errorf("%s expected field type is %s but got %s", fs.field.String(), txnFieldType.String(), sv.argType().String())
+ if !typecheck(fs.ftype, sv.argType()) {
+ err = fmt.Errorf("%s expected field type is %s but got %s", fs.field, fs.ftype, sv.argType())
}
return
}
-func opTxn(cx *EvalContext) {
- field := TxnField(cx.program[cx.pc+1])
+func (cx *EvalContext) fetchField(field TxnField, expectArray bool) (*txnFieldSpec, error) {
fs, ok := txnFieldSpecByField[field]
if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid txn field %d", field)
- return
+ return nil, fmt.Errorf("invalid txn field %d", field)
}
- _, ok = txnaFieldSpecByField[field]
- if ok {
- cx.err = fmt.Errorf("invalid txn field %d", field)
+ if expectArray != fs.array {
+ if expectArray {
+ return nil, fmt.Errorf("unsupported array field %d", field)
+ }
+ return nil, fmt.Errorf("invalid txn field %d", field)
+ }
+ return &fs, nil
+}
+
+func opTxn(cx *EvalContext) {
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+1]), false)
+ if err != nil {
+ cx.err = err
return
}
- sv, err := cx.txnFieldToStack(&cx.Txn.Txn, fs, 0, cx.GroupIndex)
+
+ sv, err := cx.txnFieldToStack(cx.Txn, fs, 0, cx.GroupIndex, false)
if err != nil {
cx.err = err
return
@@ -2122,19 +2249,14 @@ func opTxn(cx *EvalContext) {
}
func opTxna(cx *EvalContext) {
- field := TxnField(cx.program[cx.pc+1])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid txn field %d", field)
- return
- }
- _, ok = txnaFieldSpecByField[field]
- if !ok {
- cx.err = fmt.Errorf("txna unsupported field %d", field)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+1]), true)
+ if err != nil {
+ cx.err = err
return
}
+
arrayFieldIdx := uint64(cx.program[cx.pc+2])
- sv, err := cx.txnFieldToStack(&cx.Txn.Txn, fs, arrayFieldIdx, cx.GroupIndex)
+ sv, err := cx.txnFieldToStack(cx.Txn, fs, arrayFieldIdx, cx.GroupIndex, false)
if err != nil {
cx.err = err
return
@@ -2143,21 +2265,15 @@ func opTxna(cx *EvalContext) {
}
func opTxnas(cx *EvalContext) {
- last := len(cx.stack) - 1
-
- field := TxnField(cx.program[cx.pc+1])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid txn field %d", field)
- return
- }
- _, ok = txnaFieldSpecByField[field]
- if !ok {
- cx.err = fmt.Errorf("txnas unsupported field %d", field)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+1]), true)
+ if err != nil {
+ cx.err = err
return
}
+
+ last := len(cx.stack) - 1
arrayFieldIdx := cx.stack[last].Uint
- sv, err := cx.txnFieldToStack(&cx.Txn.Txn, fs, arrayFieldIdx, cx.GroupIndex)
+ sv, err := cx.txnFieldToStack(cx.Txn, fs, arrayFieldIdx, cx.GroupIndex, false)
if err != nil {
cx.err = err
return
@@ -2166,58 +2282,40 @@ func opTxnas(cx *EvalContext) {
}
func opGtxn(cx *EvalContext) {
- gtxid := cx.program[cx.pc+1]
- if int(gtxid) >= len(cx.TxnGroup) {
- cx.err = fmt.Errorf("gtxn lookup TxnGroup[%d] but it only has %d", gtxid, len(cx.TxnGroup))
+ gi := cx.program[cx.pc+1]
+ if int(gi) >= len(cx.TxnGroup) {
+ cx.err = fmt.Errorf("gtxn lookup TxnGroup[%d] but it only has %d", gi, len(cx.TxnGroup))
return
}
- tx := &cx.TxnGroup[gtxid].Txn
- field := TxnField(cx.program[cx.pc+2])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid txn field %d", field)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+2]), false)
+ if err != nil {
+ cx.err = err
return
}
- _, ok = txnaFieldSpecByField[field]
- if ok {
- cx.err = fmt.Errorf("invalid txn field %d", field)
+
+ tx := &cx.TxnGroup[gi]
+ sv, err := cx.txnFieldToStack(tx, fs, 0, int(gi), false)
+ if err != nil {
+ cx.err = err
return
}
- var sv stackValue
- var err error
- if field == GroupIndex {
- // GroupIndex; asking this when we just specified it is _dumb_, but oh well
- sv.Uint = uint64(gtxid)
- } else {
- sv, err = cx.txnFieldToStack(tx, fs, 0, uint64(gtxid))
- if err != nil {
- cx.err = err
- return
- }
- }
cx.stack = append(cx.stack, sv)
}
func opGtxna(cx *EvalContext) {
- gtxid := int(uint(cx.program[cx.pc+1]))
- if gtxid >= len(cx.TxnGroup) {
- cx.err = fmt.Errorf("gtxna lookup TxnGroup[%d] but it only has %d", gtxid, len(cx.TxnGroup))
+ gi := cx.program[cx.pc+1]
+ if int(gi) >= len(cx.TxnGroup) {
+ cx.err = fmt.Errorf("gtxna lookup TxnGroup[%d] but it only has %d", gi, len(cx.TxnGroup))
return
}
- tx := &cx.TxnGroup[gtxid].Txn
- field := TxnField(cx.program[cx.pc+2])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid txn field %d", field)
- return
- }
- _, ok = txnaFieldSpecByField[field]
- if !ok {
- cx.err = fmt.Errorf("gtxna unsupported field %d", field)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+2]), true)
+ if err != nil {
+ cx.err = err
return
}
arrayFieldIdx := uint64(cx.program[cx.pc+3])
- sv, err := cx.txnFieldToStack(tx, fs, arrayFieldIdx, uint64(gtxid))
+ tx := &cx.TxnGroup[gi]
+ sv, err := cx.txnFieldToStack(tx, fs, arrayFieldIdx, int(gi), false)
if err != nil {
cx.err = err
return
@@ -2228,25 +2326,19 @@ func opGtxna(cx *EvalContext) {
func opGtxnas(cx *EvalContext) {
last := len(cx.stack) - 1
- gtxid := cx.program[cx.pc+1]
- if int(gtxid) >= len(cx.TxnGroup) {
- cx.err = fmt.Errorf("gtxnas lookup TxnGroup[%d] but it only has %d", gtxid, len(cx.TxnGroup))
+ gi := int(cx.program[cx.pc+1])
+ if gi >= len(cx.TxnGroup) {
+ cx.err = fmt.Errorf("gtxnas lookup TxnGroup[%d] but it only has %d", gi, len(cx.TxnGroup))
return
}
- tx := &cx.TxnGroup[gtxid].Txn
- field := TxnField(cx.program[cx.pc+2])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid txn field %d", field)
- return
- }
- _, ok = txnaFieldSpecByField[field]
- if !ok {
- cx.err = fmt.Errorf("gtxnas unsupported field %d", field)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+2]), true)
+ if err != nil {
+ cx.err = err
return
}
arrayFieldIdx := cx.stack[last].Uint
- sv, err := cx.txnFieldToStack(tx, fs, arrayFieldIdx, uint64(gtxid))
+ tx := &cx.TxnGroup[gi]
+ sv, err := cx.txnFieldToStack(tx, fs, arrayFieldIdx, gi, false)
if err != nil {
cx.err = err
return
@@ -2256,59 +2348,40 @@ func opGtxnas(cx *EvalContext) {
func opGtxns(cx *EvalContext) {
last := len(cx.stack) - 1
- gtxid := cx.stack[last].Uint
- if gtxid >= uint64(len(cx.TxnGroup)) {
- cx.err = fmt.Errorf("gtxns lookup TxnGroup[%d] but it only has %d", gtxid, len(cx.TxnGroup))
+ gi := cx.stack[last].Uint
+ if gi >= uint64(len(cx.TxnGroup)) {
+ cx.err = fmt.Errorf("gtxns lookup TxnGroup[%d] but it only has %d", gi, len(cx.TxnGroup))
return
}
- tx := &cx.TxnGroup[gtxid].Txn
- field := TxnField(cx.program[cx.pc+1])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid txn field %d", field)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+1]), false)
+ if err != nil {
+ cx.err = err
return
}
- _, ok = txnaFieldSpecByField[field]
- if ok {
- cx.err = fmt.Errorf("invalid txn field %d", field)
+ tx := &cx.TxnGroup[gi]
+ sv, err := cx.txnFieldToStack(tx, fs, 0, int(gi), false)
+ if err != nil {
+ cx.err = err
return
}
- var sv stackValue
- var err error
- if field == GroupIndex {
- // GroupIndex; asking this when we just specified it is _dumb_, but oh well
- sv.Uint = gtxid
- } else {
- sv, err = cx.txnFieldToStack(tx, fs, 0, gtxid)
- if err != nil {
- cx.err = err
- return
- }
- }
cx.stack[last] = sv
}
func opGtxnsa(cx *EvalContext) {
last := len(cx.stack) - 1
- gtxid := cx.stack[last].Uint
- if gtxid >= uint64(len(cx.TxnGroup)) {
- cx.err = fmt.Errorf("gtxnsa lookup TxnGroup[%d] but it only has %d", gtxid, len(cx.TxnGroup))
- return
- }
- tx := &cx.TxnGroup[gtxid].Txn
- field := TxnField(cx.program[cx.pc+1])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid txn field %d", field)
+ gi := cx.stack[last].Uint
+ if gi >= uint64(len(cx.TxnGroup)) {
+ cx.err = fmt.Errorf("gtxnsa lookup TxnGroup[%d] but it only has %d", gi, len(cx.TxnGroup))
return
}
- _, ok = txnaFieldSpecByField[field]
- if !ok {
- cx.err = fmt.Errorf("gtxnsa unsupported field %d", field)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+1]), true)
+ if err != nil {
+ cx.err = err
return
}
arrayFieldIdx := uint64(cx.program[cx.pc+2])
- sv, err := cx.txnFieldToStack(tx, fs, arrayFieldIdx, gtxid)
+ tx := &cx.TxnGroup[gi]
+ sv, err := cx.txnFieldToStack(tx, fs, arrayFieldIdx, int(gi), false)
if err != nil {
cx.err = err
return
@@ -2320,25 +2393,19 @@ func opGtxnsas(cx *EvalContext) {
last := len(cx.stack) - 1
prev := last - 1
- gtxid := cx.stack[prev].Uint
- if gtxid >= uint64(len(cx.TxnGroup)) {
- cx.err = fmt.Errorf("gtxnsas lookup TxnGroup[%d] but it only has %d", gtxid, len(cx.TxnGroup))
+ gi := cx.stack[prev].Uint
+ if gi >= uint64(len(cx.TxnGroup)) {
+ cx.err = fmt.Errorf("gtxnsas lookup TxnGroup[%d] but it only has %d", gi, len(cx.TxnGroup))
return
}
- tx := &cx.TxnGroup[gtxid].Txn
- field := TxnField(cx.program[cx.pc+1])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid txn field %d", field)
- return
- }
- _, ok = txnaFieldSpecByField[field]
- if !ok {
- cx.err = fmt.Errorf("gtxnsas unsupported field %d", field)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+1]), true)
+ if err != nil {
+ cx.err = err
return
}
arrayFieldIdx := cx.stack[last].Uint
- sv, err := cx.txnFieldToStack(tx, fs, arrayFieldIdx, gtxid)
+ tx := &cx.TxnGroup[gi]
+ sv, err := cx.txnFieldToStack(tx, fs, arrayFieldIdx, int(gi), false)
if err != nil {
cx.err = err
return
@@ -2348,25 +2415,41 @@ func opGtxnsas(cx *EvalContext) {
}
func opItxn(cx *EvalContext) {
- field := TxnField(cx.program[cx.pc+1])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid itxn field %d", field)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+1]), false)
+ if err != nil {
+ cx.err = err
return
}
- _, ok = txnaFieldSpecByField[field]
- if ok {
- cx.err = fmt.Errorf("invalid itxn field %d", field)
+
+ if len(cx.Txn.EvalDelta.InnerTxns) == 0 {
+ cx.err = fmt.Errorf("no inner transaction available %d", fs.field)
return
}
- if len(cx.InnerTxns) == 0 {
- cx.err = fmt.Errorf("no inner transaction available %d", field)
+ itxn := &cx.Txn.EvalDelta.InnerTxns[len(cx.Txn.EvalDelta.InnerTxns)-1]
+ sv, err := cx.txnFieldToStack(itxn, fs, 0, 0, true)
+ if err != nil {
+ cx.err = err
return
}
+ cx.stack = append(cx.stack, sv)
+}
- itxn := &cx.InnerTxns[len(cx.InnerTxns)-1]
- sv, err := cx.itxnFieldToStack(itxn, fs, 0)
+func opItxna(cx *EvalContext) {
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+1]), true)
+ if err != nil {
+ cx.err = err
+ return
+ }
+ arrayFieldIdx := uint64(cx.program[cx.pc+2])
+
+ if len(cx.Txn.EvalDelta.InnerTxns) == 0 {
+ cx.err = fmt.Errorf("no inner transaction available %d", fs.field)
+ return
+ }
+
+ itxn := &cx.Txn.EvalDelta.InnerTxns[len(cx.Txn.EvalDelta.InnerTxns)-1]
+ sv, err := cx.txnFieldToStack(itxn, fs, arrayFieldIdx, 0, true)
if err != nil {
cx.err = err
return
@@ -2374,27 +2457,66 @@ func opItxn(cx *EvalContext) {
cx.stack = append(cx.stack, sv)
}
-func opItxna(cx *EvalContext) {
- field := TxnField(cx.program[cx.pc+1])
- fs, ok := txnFieldSpecByField[field]
- if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid itxn field %d", field)
+func (cx *EvalContext) getLastInnerGroup() []transactions.SignedTxnWithAD {
+ inners := cx.Txn.EvalDelta.InnerTxns
+ // If there are no inners yet, return an empty slice, which will result in an error
+ if len(inners) == 0 {
+ return inners
+ }
+ gid := inners[len(inners)-1].Txn.Group
+ // If last inner was a singleton, return it as a slice.
+ if gid.IsZero() {
+ return inners[len(inners)-1:]
+ }
+ // Look back for the first non-matching inner (by group) to find beginning
+ for i := len(inners) - 2; i >= 0; i-- {
+ if inners[i].Txn.Group != gid {
+ return inners[i+1:]
+ }
+ }
+ // All have the same (non-zero) group. Return all
+ return inners
+}
+
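The walk in getLastInnerGroup depends only on the Group field of the trailing inners: scan backward until the group id changes, treating a zero group as a singleton. The same logic over plain strings, with an empty string standing in for a zero group id (names here are illustrative):

    package main

    import "fmt"

    // lastGroup returns the suffix of ids sharing the final group id; an empty
    // id marks a singleton, so only the last element is returned.
    func lastGroup(ids []string) []string {
        if len(ids) == 0 {
            return ids
        }
        gid := ids[len(ids)-1]
        if gid == "" {
            return ids[len(ids)-1:]
        }
        for i := len(ids) - 2; i >= 0; i-- {
            if ids[i] != gid {
                return ids[i+1:]
            }
        }
        return ids
    }

    func main() {
        fmt.Println(lastGroup([]string{"a", "g", "g"}))     // [g g]
        fmt.Println(len(lastGroup([]string{"g", "g", ""}))) // 1: trailing singleton
    }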
+func opGitxn(cx *EvalContext) {
+ lastInnerGroup := cx.getLastInnerGroup()
+ gi := cx.program[cx.pc+1]
+ if int(gi) >= len(lastInnerGroup) {
+ cx.err = fmt.Errorf("gitxn %d ... but last group has %d", gi, len(lastInnerGroup))
return
}
- _, ok = txnaFieldSpecByField[field]
- if !ok {
- cx.err = fmt.Errorf("itxna unsupported field %d", field)
+ itxn := &lastInnerGroup[gi]
+
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+2]), false)
+ if err != nil {
+ cx.err = err
return
}
- arrayFieldIdx := uint64(cx.program[cx.pc+2])
- if len(cx.InnerTxns) == 0 {
- cx.err = fmt.Errorf("no inner transaction available %d", field)
+ sv, err := cx.txnFieldToStack(itxn, fs, 0, int(gi), true)
+ if err != nil {
+ cx.err = err
+ return
+ }
+ cx.stack = append(cx.stack, sv)
+}
+
+func opGitxna(cx *EvalContext) {
+ lastInnerGroup := cx.getLastInnerGroup()
+ gi := int(cx.program[cx.pc+1])
+ if gi >= len(lastInnerGroup) {
+ cx.err = fmt.Errorf("gitxna %d ... but last group has %d", gi, len(lastInnerGroup))
return
}
+ itxn := &lastInnerGroup[gi]
- itxn := &cx.InnerTxns[len(cx.InnerTxns)-1]
- sv, err := cx.itxnFieldToStack(itxn, fs, arrayFieldIdx)
+ fs, err := cx.fetchField(TxnField(cx.program[cx.pc+2]), true)
+ if err != nil {
+ cx.err = err
+ return
+ }
+ arrayFieldIdx := uint64(cx.program[cx.pc+3])
+ sv, err := cx.txnFieldToStack(itxn, fs, arrayFieldIdx, gi, true)
if err != nil {
cx.err = err
return
@@ -2402,8 +2524,8 @@ func opItxna(cx *EvalContext) {
cx.stack = append(cx.stack, sv)
}
-func opGaidImpl(cx *EvalContext, groupIdx uint64, opName string) (sv stackValue, err error) {
- if groupIdx >= uint64(len(cx.TxnGroup)) {
+func opGaidImpl(cx *EvalContext, groupIdx int, opName string) (sv stackValue, err error) {
+ if groupIdx >= len(cx.TxnGroup) {
err = fmt.Errorf("%s lookup TxnGroup[%d] but it only has %d", opName, groupIdx, len(cx.TxnGroup))
return
} else if groupIdx > cx.GroupIndex {
@@ -2430,8 +2552,8 @@ func opGaidImpl(cx *EvalContext, groupIdx uint64, opName string) (sv stackValue,
}
func opGaid(cx *EvalContext) {
- groupIdx := cx.program[cx.pc+1]
- sv, err := opGaidImpl(cx, uint64(groupIdx), "gaid")
+ groupIdx := int(cx.program[cx.pc+1])
+ sv, err := opGaidImpl(cx, groupIdx, "gaid")
if err != nil {
cx.err = err
return
@@ -2442,8 +2564,12 @@ func opGaid(cx *EvalContext) {
func opGaids(cx *EvalContext) {
last := len(cx.stack) - 1
- groupIdx := cx.stack[last].Uint
- sv, err := opGaidImpl(cx, groupIdx, "gaids")
+ gi := cx.stack[last].Uint
+ if gi >= uint64(len(cx.TxnGroup)) {
+ cx.err = fmt.Errorf("gaids lookup TxnGroup[%d] but it only has %d", gi, len(cx.TxnGroup))
+ return
+ }
+ sv, err := opGaidImpl(cx, int(gi), "gaids")
if err != nil {
cx.err = err
return
@@ -2473,61 +2599,41 @@ func (cx *EvalContext) getLatestTimestamp() (timestamp uint64, err error) {
return uint64(ts), nil
}
-func (cx *EvalContext) getApplicationID() (uint64, error) {
- if cx.Ledger == nil {
- return 0, fmt.Errorf("ledger not available")
- }
- return uint64(cx.Ledger.ApplicationID()), nil
-}
-
-func (cx *EvalContext) getApplicationAddress() (basics.Address, error) {
- if cx.Ledger == nil {
- return basics.Address{}, fmt.Errorf("ledger not available")
- }
-
- // Initialize appAddrCache if necessary
- if cx.appAddrCache == nil {
- cx.appAddrCache = make(map[basics.AppIndex]basics.Address)
- }
-
- appID := cx.Ledger.ApplicationID()
- // Hashes are expensive, so we cache computed app addrs
- appAddr, ok := cx.appAddrCache[appID]
+// getApplicationAddress memoizes app.Address() across a tx group's evaluation
+func (cx *EvalContext) getApplicationAddress(app basics.AppIndex) basics.Address {
+ /* Do not instantiate the cache here; that would mask a programming error.
+ The cache must be instantiated at EvalParams construction time, so that
+ proper sharing with inner EvalParams can work. */
+ appAddr, ok := cx.appAddrCache[app]
if !ok {
- appAddr = appID.Address()
- cx.appAddrCache[appID] = appAddr
+ appAddr = app.Address()
+ cx.appAddrCache[app] = appAddr
}
- return appAddr, nil
+ return appAddr
}
-func (cx *EvalContext) getCreatableID(groupIndex uint64) (cid uint64, err error) {
- if cx.Ledger == nil {
- err = fmt.Errorf("ledger not available")
- return
+func (cx *EvalContext) getCreatableID(groupIndex int) (cid uint64, err error) {
+ if aid := cx.TxnGroup[groupIndex].ApplyData.ConfigAsset; aid != 0 {
+ return uint64(aid), nil
}
- gi := int(groupIndex)
- if gi < 0 {
- return 0, fmt.Errorf("groupIndex %d too high", groupIndex)
+ if aid := cx.TxnGroup[groupIndex].ApplyData.ApplicationID; aid != 0 {
+ return uint64(aid), nil
}
- return uint64(cx.Ledger.GetCreatableID(gi)), nil
+ return 0, fmt.Errorf("Index %d did not create anything", groupIndex)
}
func (cx *EvalContext) getCreatorAddress() ([]byte, error) {
if cx.Ledger == nil {
return nil, fmt.Errorf("ledger not available")
}
- _, creator, err := cx.Ledger.AppParams(cx.Ledger.ApplicationID())
+ _, creator, err := cx.Ledger.AppParams(cx.appID)
if err != nil {
return nil, fmt.Errorf("No params for current app")
}
return creator[:], nil
}
-func (cx *EvalContext) getGroupID() []byte {
- return cx.Txn.Txn.Group[:]
-}
-
var zeroAddress basics.Address
func (cx *EvalContext) globalFieldToValue(fs globalFieldSpec) (sv stackValue, err error) {
@@ -2549,15 +2655,29 @@ func (cx *EvalContext) globalFieldToValue(fs globalFieldSpec) (sv stackValue, er
case LatestTimestamp:
sv.Uint, err = cx.getLatestTimestamp()
case CurrentApplicationID:
- sv.Uint, err = cx.getApplicationID()
+ sv.Uint = uint64(cx.appID)
case CurrentApplicationAddress:
- var addr basics.Address
- addr, err = cx.getApplicationAddress()
+ addr := cx.getApplicationAddress(cx.appID)
sv.Bytes = addr[:]
case CreatorAddress:
sv.Bytes, err = cx.getCreatorAddress()
case GroupID:
- sv.Bytes = cx.getGroupID()
+ sv.Bytes = cx.Txn.Txn.Group[:]
+ case OpcodeBudget:
+ sv.Uint = uint64(cx.budget() - cx.cost)
+ case CallerApplicationID:
+ if cx.caller != nil {
+ sv.Uint = uint64(cx.caller.appID)
+ } else {
+ sv.Uint = 0
+ }
+ case CallerApplicationAddress:
+ if cx.caller != nil {
+ addr := cx.caller.getApplicationAddress(cx.caller.appID)
+ sv.Bytes = addr[:]
+ } else {
+ sv.Bytes = zeroAddress[:]
+ }
default:
err = fmt.Errorf("invalid global field %d", fs.field)
}
@@ -2573,11 +2693,11 @@ func opGlobal(cx *EvalContext) {
globalField := GlobalField(cx.program[cx.pc+1])
fs, ok := globalFieldSpecByField[globalField]
if !ok || fs.version > cx.version {
- cx.err = fmt.Errorf("invalid global field %d", globalField)
+ cx.err = fmt.Errorf("invalid global field %s", globalField)
return
}
if (cx.runModeFlags & fs.mode) == 0 {
- cx.err = fmt.Errorf("global[%d] not allowed in current mode", globalField)
+ cx.err = fmt.Errorf("global[%s] not allowed in current mode", globalField)
return
}
@@ -2631,11 +2751,7 @@ func opEd25519verify(cx *EvalContext) {
copy(sig[:], cx.stack[prev].Bytes)
msg := Msg{ProgramHash: cx.programHash(), Data: cx.stack[pprev].Bytes}
- if sv.Verify(msg, sig) {
- cx.stack[pprev].Uint = 1
- } else {
- cx.stack[pprev].Uint = 0
- }
+ cx.stack[pprev].Uint = boolToUint(sv.Verify(msg, sig))
cx.stack[pprev].Bytes = nil
cx.stack = cx.stack[:prev]
}
@@ -2690,11 +2806,7 @@ func opEcdsaVerify(cx *EvalContext) {
result := secp256k1.VerifySignature(pubkey, msg, signature)
- if result {
- cx.stack[fifth].Uint = 1
- } else {
- cx.stack[fifth].Uint = 0
- }
+ cx.stack[fifth].Uint = boolToUint(result)
cx.stack[fifth].Bytes = nil
cx.stack = cx.stack[:fourth]
}
@@ -2832,30 +2944,29 @@ func opStores(cx *EvalContext) {
cx.stack = cx.stack[:prev]
}
-func opGloadImpl(cx *EvalContext, groupIdx uint64, scratchIdx byte, opName string) (scratchValue stackValue, err error) {
- if groupIdx >= uint64(len(cx.TxnGroup)) {
- err = fmt.Errorf("%s lookup TxnGroup[%d] but it only has %d", opName, groupIdx, len(cx.TxnGroup))
- return
- } else if int(scratchIdx) >= len(cx.scratch) {
- err = fmt.Errorf("invalid Scratch index %d", scratchIdx)
- return
- } else if txn := cx.TxnGroup[groupIdx].Txn; txn.Type != protocol.ApplicationCallTx {
- err = fmt.Errorf("can't use %s on non-app call txn with index %d", opName, groupIdx)
- return
- } else if groupIdx == cx.GroupIndex {
- err = fmt.Errorf("can't use %s on self, use load instead", opName)
- return
- } else if groupIdx > cx.GroupIndex {
- err = fmt.Errorf("%s can't get future scratch space from txn with index %d", opName, groupIdx)
- return
+func opGloadImpl(cx *EvalContext, groupIdx int, scratchIdx byte, opName string) (stackValue, error) {
+ var none stackValue
+ if groupIdx >= len(cx.TxnGroup) {
+ return none, fmt.Errorf("%s lookup TxnGroup[%d] but it only has %d", opName, groupIdx, len(cx.TxnGroup))
+ }
+ if int(scratchIdx) >= len(cx.scratch) {
+ return none, fmt.Errorf("invalid Scratch index %d", scratchIdx)
+ }
+ if cx.TxnGroup[groupIdx].Txn.Type != protocol.ApplicationCallTx {
+ return none, fmt.Errorf("can't use %s on non-app call txn with index %d", opName, groupIdx)
+ }
+ if groupIdx == cx.GroupIndex {
+ return none, fmt.Errorf("can't use %s on self, use load instead", opName)
+ }
+ if groupIdx > cx.GroupIndex {
+ return none, fmt.Errorf("%s can't get future scratch space from txn with index %d", opName, groupIdx)
}
- scratchValue = cx.PastSideEffects[groupIdx].getScratchValue(scratchIdx)
- return
+ return cx.pastScratch[groupIdx][scratchIdx], nil
}
func opGload(cx *EvalContext) {
- groupIdx := uint64(cx.program[cx.pc+1])
+ groupIdx := int(cx.program[cx.pc+1])
scratchIdx := cx.program[cx.pc+2]
scratchValue, err := opGloadImpl(cx, groupIdx, scratchIdx, "gload")
if err != nil {
@@ -2868,9 +2979,13 @@ func opGload(cx *EvalContext) {
func opGloads(cx *EvalContext) {
last := len(cx.stack) - 1
- groupIdx := cx.stack[last].Uint
+ gi := cx.stack[last].Uint
+ if gi >= uint64(len(cx.TxnGroup)) {
+ cx.err = fmt.Errorf("gloads lookup TxnGroup[%d] but it only has %d", gi, len(cx.TxnGroup))
+ return
+ }
scratchIdx := cx.program[cx.pc+1]
- scratchValue, err := opGloadImpl(cx, groupIdx, scratchIdx, "gloads")
+ scratchValue, err := opGloadImpl(cx, int(gi), scratchIdx, "gloads")
if err != nil {
cx.err = err
return
@@ -2879,6 +2994,30 @@ func opGloads(cx *EvalContext) {
cx.stack[last] = scratchValue
}
+func opGloadss(cx *EvalContext) {
+ last := len(cx.stack) - 1
+ prev := last - 1
+
+ gi := cx.stack[prev].Uint
+ if gi >= uint64(len(cx.TxnGroup)) {
+ cx.err = fmt.Errorf("gloads lookup TxnGroup[%d] but it only has %d", gi, len(cx.TxnGroup))
+ return
+ }
+ scratchIdx := cx.stack[last].Uint
+ if scratchIdx >= 256 {
+ cx.err = fmt.Errorf("gloadss scratch index >= 256 (%d)", scratchIdx)
+ return
+ }
+ scratchValue, err := opGloadImpl(cx, int(gi), byte(scratchIdx), "gloadss")
+ if err != nil {
+ cx.err = err
+ return
+ }
+
+ cx.stack[prev] = scratchValue
+ cx.stack = cx.stack[:last]
+}
+
func opConcat(cx *EvalContext) {
last := len(cx.stack) - 1
prev := last - 1
@@ -3122,14 +3261,19 @@ func opExtract64Bits(cx *EvalContext) {
opExtractNBytes(cx, 8) // extract 8 bytes
}
-// accountReference yields the address and Accounts offset designated
-// by a stackValue. If the stackValue is the app account, it need not
-// be in the Accounts array, therefore len(Accounts) + 1 is returned
-// as the index. This unusual convention is based on the existing
-// convention that 0 is the sender, 1-len(Accounts) are indexes into
-// Accounts array, and so len+1 is the next available value. This
-// will allow encoding into EvalDelta efficiently when it becomes
-// necessary (when apps change local state on their own account).
+// accountReference yields the address and Accounts offset designated by a
+// stackValue. If the stackValue is the app account or an account of an app in
+// created.apps, and it is not in the Accounts array, then len(Accounts) + 1
+// is returned as the index. This would let us catch the mistake if the index is
+// used for set/del. If the txn somehow "psychically" predicted the address, and
+// therefore it IS in txn.Accounts, then happy day, we can set/del it. Return
+// the proper index.
+
+// If we ever want apps to be able to change local state on these accounts
+// (which includes this app's own account!), we will need a change to
+// EvalDelta's on disk format, so that the addr can be encoded explicitly rather
+// than by index into txn.Accounts.
+
func (cx *EvalContext) accountReference(account stackValue) (basics.Address, uint64, error) {
if account.argType() == StackUint64 {
addr, err := cx.Txn.Txn.AddressByIndex(account.Uint, cx.Txn.Txn.Sender)
@@ -3141,17 +3285,40 @@ func (cx *EvalContext) accountReference(account stackValue) (basics.Address, uin
}
idx, err := cx.Txn.Txn.IndexByAddress(addr, cx.Txn.Txn.Sender)
+ invalidIndex := uint64(len(cx.Txn.Txn.Accounts) + 1)
+ // Allow an address for an app that was created in group
+ if err != nil && cx.version >= createdResourcesVersion {
+ for _, appID := range cx.created.apps {
+ createdAddress := cx.getApplicationAddress(appID)
+ if addr == createdAddress {
+ return addr, invalidIndex, nil
+ }
+ }
+ }
+
+ // this app's address is also allowed
if err != nil {
- // Application address is acceptable. index is meaningless though
- appAddr, _ := cx.getApplicationAddress()
+ appAddr := cx.getApplicationAddress(cx.appID)
if appAddr == addr {
- return addr, uint64(len(cx.Txn.Txn.Accounts) + 1), nil
+ return addr, invalidIndex, nil
}
}
return addr, idx, err
}
+func (cx *EvalContext) mutableAccountReference(account stackValue) (basics.Address, uint64, error) {
+ addr, accountIdx, err := cx.accountReference(account)
+ if err == nil && accountIdx > uint64(len(cx.Txn.Txn.Accounts)) {
+ // There was no error, but accountReference has signaled that accountIdx
+ // is not for mutable ops (because it can't encode it in EvalDelta)
+ // This also tells us that account.address() will work.
+ addr, _ := account.address()
+ err = fmt.Errorf("invalid Account reference for mutation %s", addr)
+ }
+ return addr, accountIdx, err
+}
+
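accountReference and mutableAccountReference together encode one convention: offsets 0 through len(Accounts) can be written into an EvalDelta, while len(Accounts)+1 flags addresses that are readable but not mutable (the app account, or accounts of apps created in the group). A compressed sketch of that convention, with strings standing in for basics.Address and hypothetical names:

    package main

    import "fmt"

    // accountOffset mirrors the convention described above: 0 is the sender,
    // 1..len(accounts) index into accounts, and len(accounts)+1 is the sentinel
    // for addresses (such as the app account) that cannot be encoded in an
    // EvalDelta.
    func accountOffset(sender string, accounts []string, addr, appAddr string) (uint64, error) {
        if addr == sender {
            return 0, nil
        }
        for i, a := range accounts {
            if a == addr {
                return uint64(i + 1), nil
            }
        }
        if addr == appAddr {
            return uint64(len(accounts) + 1), nil // readable, but not mutable
        }
        return 0, fmt.Errorf("invalid Account reference %s", addr)
    }

    func main() {
        idx, _ := accountOffset("S", []string{"A", "B"}, "APP", "APP")
        fmt.Println(idx) // 3: the sentinel, so a set/del would be rejected
    }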
type opQuery func(basics.Address, *config.ConsensusParams) (basics.MicroAlgos, error)
func opBalanceQuery(cx *EvalContext, query opQuery, item string) error {
@@ -3224,12 +3391,8 @@ func opAppOptedIn(cx *EvalContext) {
return
}
+ cx.stack[prev].Uint = boolToUint(optedIn)
cx.stack[prev].Bytes = nil
- if optedIn {
- cx.stack[prev].Uint = 1
- } else {
- cx.stack[prev].Uint = 0
- }
cx.stack = cx.stack[:last]
}
@@ -3372,11 +3535,28 @@ func opAppLocalPut(cx *EvalContext) {
return
}
- addr, accountIdx, err := cx.accountReference(cx.stack[pprev])
- if err == nil {
- err = cx.Ledger.SetLocal(addr, key, sv.toTealValue(), accountIdx)
+ addr, accountIdx, err := cx.mutableAccountReference(cx.stack[pprev])
+ if err != nil {
+ cx.err = err
+ return
}
+ // if writing the same value, do nothing, matching ledger behavior with
+ // previous BuildEvalDelta mechanism
+ etv, ok, err := cx.Ledger.GetLocal(addr, cx.appID, key, accountIdx)
+ if err != nil {
+ cx.err = err
+ return
+ }
+
+ tv := sv.toTealValue()
+ if !ok || tv != etv {
+ if _, ok := cx.Txn.EvalDelta.LocalDeltas[accountIdx]; !ok {
+ cx.Txn.EvalDelta.LocalDeltas[accountIdx] = basics.StateDelta{}
+ }
+ cx.Txn.EvalDelta.LocalDeltas[accountIdx][key] = tv.ToValueDelta()
+ }
+ err = cx.Ledger.SetLocal(addr, cx.appID, key, tv, accountIdx)
if err != nil {
cx.err = err
return
@@ -3397,7 +3577,19 @@ func opAppGlobalPut(cx *EvalContext) {
return
}
- err := cx.Ledger.SetGlobal(key, sv.toTealValue())
+ // if writing the same value, do nothing, matching ledger behavior with
+ // previous BuildEvalDelta mechanism
+ etv, ok, err := cx.Ledger.GetGlobal(cx.appID, key)
+ if err != nil {
+ cx.err = err
+ return
+ }
+ tv := sv.toTealValue()
+ if !ok || tv != etv {
+ cx.Txn.EvalDelta.GlobalDelta[key] = tv.ToValueDelta()
+ }
+
+ err = cx.Ledger.SetGlobal(cx.appID, key, tv)
if err != nil {
cx.err = err
return
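The Get-before-Set added to both opAppLocalPut and opAppGlobalPut exists only to suppress no-op deltas, matching what the old BuildEvalDelta pass produced. The shape, reduced to plain maps standing in for the ledger and for basics.StateDelta:

    package main

    import "fmt"

    // putWithDelta writes key=val into state and records a delta entry only
    // when the stored value actually changes.
    func putWithDelta(state, delta map[string]string, key, val string) {
        if old, ok := state[key]; !ok || old != val {
            delta[key] = val
        }
        state[key] = val
    }

    func main() {
        state := map[string]string{"k": "v"}
        delta := map[string]string{}
        putWithDelta(state, delta, "k", "v") // same value: no delta recorded
        putWithDelta(state, delta, "k", "w") // change: delta recorded
        fmt.Println(len(delta)) // 1
    }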
@@ -3417,9 +3609,15 @@ func opAppLocalDel(cx *EvalContext) {
return
}
- addr, accountIdx, err := cx.accountReference(cx.stack[prev])
+ addr, accountIdx, err := cx.mutableAccountReference(cx.stack[prev])
if err == nil {
- err = cx.Ledger.DelLocal(addr, key, accountIdx)
+ if _, ok := cx.Txn.EvalDelta.LocalDeltas[accountIdx]; !ok {
+ cx.Txn.EvalDelta.LocalDeltas[accountIdx] = basics.StateDelta{}
+ }
+ cx.Txn.EvalDelta.LocalDeltas[accountIdx][key] = basics.ValueDelta{
+ Action: basics.DeleteAction,
+ }
+ err = cx.Ledger.DelLocal(addr, cx.appID, key, accountIdx)
}
if err != nil {
cx.err = err
@@ -3439,7 +3637,10 @@ func opAppGlobalDel(cx *EvalContext) {
return
}
- err := cx.Ledger.DelGlobal(key)
+ cx.Txn.EvalDelta.GlobalDelta[key] = basics.ValueDelta{
+ Action: basics.DeleteAction,
+ }
+ err := cx.Ledger.DelGlobal(cx.appID, key)
if err != nil {
cx.err = err
return
@@ -3457,7 +3658,7 @@ func opAppGlobalDel(cx *EvalContext) {
func appReference(cx *EvalContext, ref uint64, foreign bool) (basics.AppIndex, error) {
if cx.version >= directRefEnabledVersion {
if ref == 0 {
- return cx.Ledger.ApplicationID(), nil
+ return cx.appID, nil
}
if ref <= uint64(len(cx.Txn.Txn.ForeignApps)) {
return basics.AppIndex(cx.Txn.Txn.ForeignApps[ref-1]), nil
@@ -3467,22 +3668,29 @@ func appReference(cx *EvalContext, ref uint64, foreign bool) (basics.AppIndex, e
return appID, nil
}
}
- // It should be legal to use your own app id, which
- // can't be in ForeignApps during creation, because it
- // is unknown then. But it can be discovered in the
- // app code. It's tempting to combine this with the
- // == 0 test, above, but it must come after the check
- // for being below len(ForeignApps)
- if ref == uint64(cx.Ledger.ApplicationID()) {
- return cx.Ledger.ApplicationID(), nil
+ // or was created in group
+ if cx.version >= createdResourcesVersion {
+ for _, appID := range cx.created.apps {
+ if appID == basics.AppIndex(ref) {
+ return appID, nil
+ }
+ }
+ }
+ // It should be legal to use your own app id, which can't be in
+ // ForeignApps during creation, because it is unknown then. But it can
+ // be discovered in the app code. It's tempting to combine this with
+ // the == 0 test, above, but it must come after the check for being
+ // below len(ForeignApps)
+ if ref == uint64(cx.appID) {
+ return cx.appID, nil
}
} else {
// Old rules
+ if ref == 0 {
+ return cx.appID, nil
+ }
if foreign {
// In old versions, a foreign reference must be an index in ForeignAssets or 0
- if ref == 0 {
- return cx.Ledger.ApplicationID(), nil
- }
if ref <= uint64(len(cx.Txn.Txn.ForeignApps)) {
return basics.AppIndex(cx.Txn.Txn.ForeignApps[ref-1]), nil
}
@@ -3505,6 +3713,14 @@ func asaReference(cx *EvalContext, ref uint64, foreign bool) (basics.AssetIndex,
return assetID, nil
}
}
+ // or was created in group
+ if cx.version >= createdResourcesVersion {
+ for _, assetID := range cx.created.asas {
+ if assetID == basics.AssetIndex(ref) {
+ return assetID, nil
+ }
+ }
+ }
} else {
// Old rules
if foreign {
@@ -3648,10 +3864,66 @@ func opAppParamsGet(cx *EvalContext) {
cx.stack = append(cx.stack, stackValue{Uint: exist})
}
+func opAcctParamsGet(cx *EvalContext) {
+ last := len(cx.stack) - 1 // acct
+
+ if cx.Ledger == nil {
+ cx.err = fmt.Errorf("ledger not available")
+ return
+ }
+
+ addr, _, err := cx.accountReference(cx.stack[last])
+ if err != nil {
+ cx.err = err
+ return
+ }
+
+ paramField := AcctParamsField(cx.program[cx.pc+1])
+ fs, ok := acctParamsFieldSpecByField[paramField]
+ if !ok || fs.version > cx.version {
+ cx.err = fmt.Errorf("invalid acct_params_get field %d", paramField)
+ return
+ }
+
+ bal, err := cx.Ledger.Balance(addr)
+ if err != nil {
+ cx.err = err
+ return
+ }
+ exist := boolToUint(bal.Raw > 0)
+
+ var value stackValue
+
+ switch fs.field {
+ case AcctBalance:
+ value.Uint = bal.Raw
+ case AcctMinBalance:
+ mbal, err := cx.Ledger.MinBalance(addr, cx.Proto)
+ if err != nil {
+ cx.err = err
+ return
+ }
+ value.Uint = mbal.Raw
+ case AcctAuthAddr:
+ auth, err := cx.Ledger.Authorizer(addr)
+ if err != nil {
+ cx.err = err
+ return
+ }
+ if auth == addr {
+ value.Bytes = zeroAddress[:]
+ } else {
+ value.Bytes = auth[:]
+ }
+ }
+ cx.stack[last] = value
+ cx.stack = append(cx.stack, stackValue{Uint: exist})
+}
+
func opLog(cx *EvalContext) {
last := len(cx.stack) - 1
- if len(cx.Logs) == MaxLogCalls {
+ if len(cx.Txn.EvalDelta.Logs) == MaxLogCalls {
cx.err = fmt.Errorf("too many log calls in program. up to %d is allowed", MaxLogCalls)
return
}
@@ -3661,38 +3933,32 @@ func opLog(cx *EvalContext) {
cx.err = fmt.Errorf("program logs too large. %d bytes > %d bytes limit", cx.logSize, MaxLogSize)
return
}
- cx.Logs = append(cx.Logs, string(log.Bytes))
+ cx.Txn.EvalDelta.Logs = append(cx.Txn.EvalDelta.Logs, string(log.Bytes))
cx.stack = cx.stack[:last]
}
func authorizedSender(cx *EvalContext, addr basics.Address) bool {
- appAddr, err := cx.getApplicationAddress()
- if err != nil {
- return false
- }
authorizer, err := cx.Ledger.Authorizer(addr)
if err != nil {
return false
}
- return appAddr == authorizer
+ return cx.getApplicationAddress(cx.appID) == authorizer
}
// addInnerTxn appends a fresh SignedTxn to subtxns, populated with reasonable
// defaults.
func addInnerTxn(cx *EvalContext) error {
- addr, err := cx.getApplicationAddress()
- if err != nil {
- return err
- }
+ addr := cx.getApplicationAddress(cx.appID)
// For compatibility with v5, in which failures only occurred in the submit,
- // we only fail here if we are OVER the MaxInnerTransactions limit. Thus
- // this allows construction of one more Inner than is actually allowed, and
- // will fail in submit. (But we do want the check here, so this can't become
- // unbounded.) The MaxTxGroupSize check can be, and is, precise.
- if len(cx.InnerTxns)+len(cx.subtxns) > cx.Proto.MaxInnerTransactions ||
- len(cx.subtxns) >= cx.Proto.MaxTxGroupSize {
- return errors.New("attempt to create too many inner transactions")
+ // we only fail here if we are already over the max inner limit. Thus this
+ // allows construction of one more Inner than is actually allowed, and will
+ // fail in submit. (But we do want the check here, so this can't become
+ // unbounded.) The MaxTxGroupSize check can be, and is, precise. (That is,
+ // if we are at max group size, we can panic now, since we are trying to add
+ // too many)
+ if len(cx.Txn.EvalDelta.InnerTxns)+len(cx.subtxns) > cx.allowedInners() || len(cx.subtxns) >= cx.Proto.MaxTxGroupSize {
+ return fmt.Errorf("too many inner transactions %d", len(cx.Txn.EvalDelta.InnerTxns)+len(cx.subtxns))
}
stxn := transactions.SignedTxn{}
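The comment above boils down to which comparisons keep v5 behavior: construction may exceed the inner allowance by exactly one (submit rejects it), but may never exceed the group size. A compressed sketch of that check, with hypothetical names:

    // canBeginInner mirrors the limit check in addInnerTxn: building fails only
    // once we are strictly over the inner allowance, so one extra can be staged
    // and rejected later at submit; the group-size check is exact.
    func canBeginInner(committed, staged, allowedInners, maxGroup int) bool {
        return committed+staged <= allowedInners && staged < maxGroup
    }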
@@ -3757,25 +4023,67 @@ func (cx *EvalContext) availableAccount(sv stackValue) (basics.Address, error) {
// don't need (or want!) to allow low numbers to represent the asset at that
// index in ForeignAssets array.
func (cx *EvalContext) availableAsset(sv stackValue) (basics.AssetIndex, error) {
- aid, err := sv.uint()
+ uint, err := sv.uint()
if err != nil {
return basics.AssetIndex(0), err
}
+ aid := basics.AssetIndex(uint)
+
// Ensure that aid is in Foreign Assets
for _, assetID := range cx.Txn.Txn.ForeignAssets {
- if assetID == basics.AssetIndex(aid) {
- return basics.AssetIndex(aid), nil
+ if assetID == aid {
+ return aid, nil
+ }
+ }
+ // or was created in group
+ if cx.version >= createdResourcesVersion {
+ for _, assetID := range cx.created.asas {
+ if assetID == aid {
+ return aid, nil
+ }
}
}
+
return basics.AssetIndex(0), fmt.Errorf("invalid Asset reference %d", aid)
}
-func (cx *EvalContext) stackIntoTxnField(sv stackValue, fs txnFieldSpec, txn *transactions.Transaction) (err error) {
+// availableApp is used instead of appReference for more recent (stateful)
+// opcodes that don't need (or want!) to allow low numbers to represent the app
+// at that index in ForeignApps array.
+func (cx *EvalContext) availableApp(sv stackValue) (basics.AppIndex, error) {
+ uint, err := sv.uint()
+ if err != nil {
+ return basics.AppIndex(0), err
+ }
+ aid := basics.AppIndex(uint)
+
+ // Ensure that aid is in Foreign Apps
+ for _, appID := range cx.Txn.Txn.ForeignApps {
+ if appID == aid {
+ return aid, nil
+ }
+ }
+ // or was created in group
+ if cx.version >= createdResourcesVersion {
+ for _, appID := range cx.created.apps {
+ if appID == aid {
+ return aid, nil
+ }
+ }
+ }
+ // Or, it can be the current app
+ if cx.appID == aid {
+ return aid, nil
+ }
+
+ return 0, fmt.Errorf("invalid App reference %d", aid)
+}
+
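availableAsset and availableApp share one rule: a reference is valid if it appears in the transaction's foreign array, was created earlier in the group, or (for apps only) is the running app itself. A generic sketch of that rule, with plain uint64 ids and a hypothetical helper name:

    // available reports whether id may be referenced under the shared rule;
    // pass self = 0 for assets, where there is no own-id case.
    func available(id uint64, foreign, created []uint64, self uint64) bool {
        for _, x := range foreign {
            if x == id {
                return true
            }
        }
        for _, x := range created {
            if x == id {
                return true
            }
        }
        return self != 0 && id == self
    }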
+func (cx *EvalContext) stackIntoTxnField(sv stackValue, fs *txnFieldSpec, txn *transactions.Transaction) (err error) {
switch fs.field {
case Type:
if sv.Bytes == nil {
- err = fmt.Errorf("Type arg not a byte array")
- return
+ return fmt.Errorf("Type arg not a byte array")
}
txType := string(sv.Bytes)
ver, ok := innerTxnTypes[txType]
@@ -3810,11 +4118,10 @@ func (cx *EvalContext) stackIntoTxnField(sv stackValue, fs txnFieldSpec, txn *tr
// round, and separation by MaxLifetime (check lifetime in submit, not here)
case Note:
if len(sv.Bytes) > cx.Proto.MaxTxnNoteBytes {
- err = fmt.Errorf("%s may not exceed %d bytes", fs.field, cx.Proto.MaxTxnNoteBytes)
- } else {
- txn.Note = make([]byte, len(sv.Bytes))
- copy(txn.Note[:], sv.Bytes)
+ return fmt.Errorf("%s may not exceed %d bytes", fs.field, cx.Proto.MaxTxnNoteBytes)
}
+ txn.Note = make([]byte, len(sv.Bytes))
+ copy(txn.Note, sv.Bytes)
// GenesisID, GenesisHash unsettable: surely makes no sense
// Group unsettable: Can't make groups from AVM (yet?)
// Lease unsettable: This seems potentially useful.
@@ -3825,16 +4132,14 @@ func (cx *EvalContext) stackIntoTxnField(sv stackValue, fs txnFieldSpec, txn *tr
// KeyReg
case VotePK:
if len(sv.Bytes) != 32 {
- err = fmt.Errorf("%s must be 32 bytes", fs.field)
- } else {
- copy(txn.VotePK[:], sv.Bytes)
+ return fmt.Errorf("%s must be 32 bytes", fs.field)
}
+ copy(txn.VotePK[:], sv.Bytes)
case SelectionPK:
if len(sv.Bytes) != 32 {
- err = fmt.Errorf("%s must be 32 bytes", fs.field)
- } else {
- copy(txn.SelectionPK[:], sv.Bytes)
+ return fmt.Errorf("%s must be 32 bytes", fs.field)
}
+ copy(txn.SelectionPK[:], sv.Bytes)
case VoteFirst:
var round uint64
round, err = sv.uint()
@@ -3891,10 +4196,9 @@ func (cx *EvalContext) stackIntoTxnField(sv stackValue, fs txnFieldSpec, txn *tr
txn.AssetParams.URL, err = sv.string(cx.Proto.MaxAssetURLBytes)
case ConfigAssetMetadataHash:
if len(sv.Bytes) != 32 {
- err = fmt.Errorf("%s must be 32 bytes", fs.field)
- } else {
- copy(txn.AssetParams.MetadataHash[:], sv.Bytes)
+ return fmt.Errorf("%s must be 32 bytes", fs.field)
}
+ copy(txn.AssetParams.MetadataHash[:], sv.Bytes)
case ConfigAssetManager:
txn.AssetParams.Manager, err = sv.address()
case ConfigAssetReserve:
@@ -3911,10 +4215,96 @@ func (cx *EvalContext) stackIntoTxnField(sv stackValue, fs txnFieldSpec, txn *tr
case FreezeAssetFrozen:
txn.AssetFrozen, err = sv.bool()
- // appl needs to wait. Can't call AVM from AVM.
-
+ // ApplicationCall
+ case ApplicationID:
+ txn.ApplicationID, err = cx.availableApp(sv)
+ case OnCompletion:
+ var onc uint64
+ onc, err = sv.uintMaxed(uint64(transactions.DeleteApplicationOC))
+ txn.OnCompletion = transactions.OnCompletion(onc)
+ case ApplicationArgs:
+ if sv.Bytes == nil {
+ return fmt.Errorf("ApplicationArg is not a byte array")
+ }
+ total := len(sv.Bytes)
+ for _, arg := range txn.ApplicationArgs {
+ total += len(arg)
+ }
+ if total > cx.Proto.MaxAppTotalArgLen {
+ return errors.New("total application args length too long")
+ }
+ if len(txn.ApplicationArgs) >= cx.Proto.MaxAppArgs {
+ return errors.New("too many application args")
+ }
+ new := make([]byte, len(sv.Bytes))
+ copy(new, sv.Bytes)
+ txn.ApplicationArgs = append(txn.ApplicationArgs, new)
+ case Accounts:
+ var new basics.Address
+ new, err = cx.availableAccount(sv)
+ if err != nil {
+ return
+ }
+ if len(txn.Accounts) >= cx.Proto.MaxAppTxnAccounts {
+ return errors.New("too many foreign accounts")
+ }
+ txn.Accounts = append(txn.Accounts, new)
+ case ApprovalProgram:
+ maxPossible := cx.Proto.MaxAppProgramLen * (1 + cx.Proto.MaxExtraAppProgramPages)
+ if len(sv.Bytes) > maxPossible {
+ return fmt.Errorf("%s may not exceed %d bytes", fs.field, maxPossible)
+ }
+ txn.ApprovalProgram = make([]byte, len(sv.Bytes))
+ copy(txn.ApprovalProgram, sv.Bytes)
+ case ClearStateProgram:
+ maxPossible := cx.Proto.MaxAppProgramLen * (1 + cx.Proto.MaxExtraAppProgramPages)
+ if len(sv.Bytes) > maxPossible {
+ return fmt.Errorf("%s may not exceed %d bytes", fs.field, maxPossible)
+ }
+ txn.ClearStateProgram = make([]byte, len(sv.Bytes))
+ copy(txn.ClearStateProgram, sv.Bytes)
+ case Assets:
+ var new basics.AssetIndex
+ new, err = cx.availableAsset(sv)
+ if err != nil {
+ return
+ }
+ if len(txn.ForeignAssets) >= cx.Proto.MaxAppTxnForeignAssets {
+ return errors.New("too many foreign assets")
+ }
+ txn.ForeignAssets = append(txn.ForeignAssets, new)
+ case Applications:
+ var new basics.AppIndex
+ new, err = cx.availableApp(sv)
+ if err != nil {
+ return
+ }
+ if len(txn.ForeignApps) >= cx.Proto.MaxAppTxnForeignApps {
+ return errors.New("too many foreign apps")
+ }
+ txn.ForeignApps = append(txn.ForeignApps, new)
+ case GlobalNumUint:
+ txn.GlobalStateSchema.NumUint, err =
+ sv.uintMaxed(cx.Proto.MaxGlobalSchemaEntries)
+ case GlobalNumByteSlice:
+ txn.GlobalStateSchema.NumByteSlice, err =
+ sv.uintMaxed(cx.Proto.MaxGlobalSchemaEntries)
+ case LocalNumUint:
+ txn.LocalStateSchema.NumUint, err =
+ sv.uintMaxed(cx.Proto.MaxLocalSchemaEntries)
+ case LocalNumByteSlice:
+ txn.LocalStateSchema.NumByteSlice, err =
+ sv.uintMaxed(cx.Proto.MaxLocalSchemaEntries)
+ case ExtraProgramPages:
+ var epp uint64
+ epp, err =
+ sv.uintMaxed(uint64(cx.Proto.MaxExtraAppProgramPages))
+ if err != nil {
+ return
+ }
+ txn.ExtraProgramPages = uint32(epp)
default:
- return fmt.Errorf("invalid itxn_field %s", fs.field)
+ err = fmt.Errorf("invalid itxn_field %s", fs.field)
}
return
}
@@ -3933,7 +4323,7 @@ func opTxField(cx *EvalContext) {
return
}
sv := cx.stack[last]
- cx.err = cx.stackIntoTxnField(sv, fs, &cx.subtxns[itx].Txn)
+ cx.err = cx.stackIntoTxnField(sv, &fs, &cx.subtxns[itx].Txn)
cx.stack = cx.stack[:last] // pop
}
@@ -3943,10 +4333,11 @@ func opTxSubmit(cx *EvalContext) {
return
}
- // Should never trigger, since itxn_next checks these too.
- if len(cx.InnerTxns)+len(cx.subtxns) > cx.Proto.MaxInnerTransactions ||
- len(cx.subtxns) > cx.Proto.MaxTxGroupSize {
- cx.err = errors.New("too many inner transactions")
+ // Should rarely trigger, since itxn_next checks these too. (But that check
+ // must be imperfect; see its comment.) In contrast to that check, subtxns is
+ // already populated here.
+ if len(cx.Txn.EvalDelta.InnerTxns)+len(cx.subtxns) > cx.allowedInners() || len(cx.subtxns) > cx.Proto.MaxTxGroupSize {
+ cx.err = fmt.Errorf("too many inner transactions %d", len(cx.Txn.EvalDelta.InnerTxns)+len(cx.subtxns))
return
}
@@ -3977,6 +4368,14 @@ func opTxSubmit(cx *EvalContext) {
*cx.FeeCredit = basics.AddSaturate(*cx.FeeCredit, overpay)
}
+ // All subtxns will have zero'd GroupID since GroupID can't be set in
+ // AVM. (no need to blank it out before hashing for TxID)
+ var group transactions.TxGroup
+ var parent transactions.Txid
+ isGroup := len(cx.subtxns) > 1
+ if isGroup {
+ parent = cx.Txn.ID()
+ }
for itx := range cx.subtxns {
// The goal is to follow the same invariants used by the
// transaction pool. Namely that any transaction that makes it
@@ -3988,23 +4387,52 @@ func opTxSubmit(cx *EvalContext) {
}
// Recall that WellFormed does not care about individual
- // transaction fees because of fee pooling. So we check below.
+ // transaction fees because of fee pooling. Checked above.
cx.err = cx.subtxns[itx].Txn.WellFormed(*cx.Specials, *cx.Proto)
if cx.err != nil {
return
}
- ad, err := cx.Ledger.Perform(&cx.subtxns[itx].Txn, *cx.Specials)
+ // Disallow re-entrancy
+ if cx.subtxns[itx].Txn.Type == protocol.ApplicationCallTx {
+ if cx.appID == cx.subtxns[itx].Txn.ApplicationID {
+ cx.err = fmt.Errorf("attempt to self-call")
+ return
+ }
+ for parent := cx.caller; parent != nil; parent = parent.caller {
+ if parent.appID == cx.subtxns[itx].Txn.ApplicationID {
+ cx.err = fmt.Errorf("attempt to re-enter %d", parent.appID)
+ return
+ }
+ }
+ }
+
+ if isGroup {
+ innerOffset := len(cx.Txn.EvalDelta.InnerTxns)
+ group.TxGroupHashes = append(group.TxGroupHashes,
+ crypto.Digest(cx.subtxns[itx].Txn.InnerID(parent, innerOffset)))
+ }
+ }
+
+ if isGroup {
+ groupID := crypto.HashObj(group)
+ for itx := range cx.subtxns {
+ cx.subtxns[itx].Txn.Group = groupID
+ }
+ }
+
+ ep := NewInnerEvalParams(cx.subtxns, cx)
+ for i := range ep.TxnGroup {
+ err := cx.Ledger.Perform(i, ep)
if err != nil {
cx.err = err
return
}
-
- cx.InnerTxns = append(cx.InnerTxns, transactions.SignedTxnWithAD{
- SignedTxn: cx.subtxns[itx],
- ApplyData: ad,
- })
+ // This is mostly a no-op, because Perform does its work "in-place", but
+ // RecordAD has some further responsibilities.
+ ep.RecordAD(i, ep.TxnGroup[i].ApplyData)
}
+ cx.Txn.EvalDelta.InnerTxns = append(cx.Txn.EvalDelta.InnerTxns, ep.TxnGroup...)
cx.subtxns = nil
}
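Group formation for inners follows the familiar pattern: collect the members' ids, hash the collection, and stamp the digest into every member's Group field before evaluation. A rough sketch of that shape; sha512/256 here is only a stand-in for the domain-separated hashing the real code performs via crypto.HashObj on a transactions.TxGroup:

    package main

    import (
        "crypto/sha512"
        "fmt"
    )

    // stampGroup hashes the member ids together and returns the digest that
    // each member's Group field would be set to. Illustrative only.
    func stampGroup(memberIDs [][]byte) [32]byte {
        h := sha512.New512_256()
        for _, id := range memberIDs {
            h.Write(id)
        }
        var gid [32]byte
        copy(gid[:], h.Sum(nil))
        return gid
    }

    func main() {
        gid := stampGroup([][]byte{[]byte("itxn-0"), []byte("itxn-1")})
        fmt.Printf("group id: %x...\n", gid[:4])
    }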
@@ -4038,7 +4466,7 @@ func (cx *EvalContext) PcDetails() (pc int, dis string) {
func base64Decode(encoded []byte, encoding *base64.Encoding) ([]byte, error) {
decoded := make([]byte, encoding.DecodedLen(len(encoded)))
- n, err := encoding.Strict().Decode(decoded, encoded)
+ n, err := encoding.Decode(decoded, encoded)
if err != nil {
return decoded[:0], err
}
diff --git a/data/transactions/logic/evalAppTxn_test.go b/data/transactions/logic/evalAppTxn_test.go
index 4c9f3a4f0..786b60bf8 100644
--- a/data/transactions/logic/evalAppTxn_test.go
+++ b/data/transactions/logic/evalAppTxn_test.go
@@ -14,92 +14,104 @@
// You should have received a copy of the GNU Affero General Public License
// along with go-algorand. If not, see <https://www.gnu.org/licenses/>.
-package logic
+package logic_test
import (
+ "encoding/hex"
"fmt"
+ "strings"
"testing"
"github.com/algorand/go-algorand/data/basics"
+ "github.com/algorand/go-algorand/data/transactions"
+ . "github.com/algorand/go-algorand/data/transactions/logic"
+ "github.com/algorand/go-algorand/data/txntest"
+ "github.com/algorand/go-algorand/protocol"
"github.com/stretchr/testify/require"
)
func TestInnerTypesV5(t *testing.T) {
- v5, _ := makeSampleEnvWithVersion(5)
+ v5, _, _ := MakeSampleEnvWithVersion(5)
// not allowed in v5
- testApp(t, "itxn_begin; byte \"keyreg\"; itxn_field Type; itxn_submit; int 1;", v5, "keyreg is not a valid Type for itxn_field")
- testApp(t, "itxn_begin; int keyreg; itxn_field TypeEnum; itxn_submit; int 1;", v5, "keyreg is not a valid Type for itxn_field")
+ TestApp(t, "itxn_begin; byte \"keyreg\"; itxn_field Type; itxn_submit; int 1;", v5, "keyreg is not a valid Type for itxn_field")
+ TestApp(t, "itxn_begin; int keyreg; itxn_field TypeEnum; itxn_submit; int 1;", v5, "keyreg is not a valid Type for itxn_field")
+
+ TestApp(t, "itxn_begin; byte \"appl\"; itxn_field Type; itxn_submit; int 1;", v5, "appl is not a valid Type for itxn_field")
+ TestApp(t, "itxn_begin; int appl; itxn_field TypeEnum; itxn_submit; int 1;", v5, "appl is not a valid Type for itxn_field")
}
func TestCurrentInnerTypes(t *testing.T) {
- ep, ledger := makeSampleEnv()
- testApp(t, "itxn_submit; int 1;", ep, "itxn_submit without itxn_begin")
- testApp(t, "int pay; itxn_field TypeEnum; itxn_submit; int 1;", ep, "itxn_field without itxn_begin")
- testApp(t, "itxn_begin; itxn_submit; int 1;", ep, "unknown tx type")
+ ep, tx, ledger := MakeSampleEnv()
+ TestApp(t, "itxn_submit; int 1;", ep, "itxn_submit without itxn_begin")
+ TestApp(t, "int pay; itxn_field TypeEnum; itxn_submit; int 1;", ep, "itxn_field without itxn_begin")
+ TestApp(t, "itxn_begin; itxn_submit; int 1;", ep, "unknown tx type")
// bad type
- testApp(t, "itxn_begin; byte \"pya\"; itxn_field Type; itxn_submit; int 1;", ep, "pya is not a valid Type")
+ TestApp(t, "itxn_begin; byte \"pya\"; itxn_field Type; itxn_submit; int 1;", ep, "pya is not a valid Type")
// mixed up the int form for the byte form
- testApp(t, obfuscate("itxn_begin; int pay; itxn_field Type; itxn_submit; int 1;"), ep, "Type arg not a byte array")
+ TestApp(t, Obfuscate("itxn_begin; int pay; itxn_field Type; itxn_submit; int 1;"), ep, "Type arg not a byte array")
// or vice versa
- testApp(t, obfuscate("itxn_begin; byte \"pay\"; itxn_field TypeEnum; itxn_submit; int 1;"), ep, "not a uint64")
+ TestApp(t, Obfuscate("itxn_begin; byte \"pay\"; itxn_field TypeEnum; itxn_submit; int 1;"), ep, "not a uint64")
- // good types, not allowed yet
- testApp(t, "itxn_begin; byte \"appl\"; itxn_field Type; itxn_submit; int 1;", ep, "appl is not a valid Type for itxn_field")
- // same, as enums
- testApp(t, "itxn_begin; int appl; itxn_field TypeEnum; itxn_submit; int 1;", ep, "appl is not a valid Type for itxn_field")
- testApp(t, "itxn_begin; int 42; itxn_field TypeEnum; itxn_submit; int 1;", ep, "42 is not a valid TypeEnum")
- testApp(t, "itxn_begin; int 0; itxn_field TypeEnum; itxn_submit; int 1;", ep, "0 is not a valid TypeEnum")
+ // some bad types
+ TestApp(t, "itxn_begin; int 42; itxn_field TypeEnum; itxn_submit; int 1;", ep, "42 is not a valid TypeEnum")
+ TestApp(t, "itxn_begin; int 0; itxn_field TypeEnum; itxn_submit; int 1;", ep, "0 is not a valid TypeEnum")
// "insufficient balance" because app account is charged fee
// (defaults make these 0 pay|axfer to zero address, from app account)
- testApp(t, "itxn_begin; byte \"pay\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
- testApp(t, "itxn_begin; byte \"axfer\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
- testApp(t, "itxn_begin; int pay; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
- testApp(t, "itxn_begin; int axfer; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; byte \"pay\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; byte \"axfer\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; int pay; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; int axfer; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
- testApp(t, "itxn_begin; byte \"acfg\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
- testApp(t, "itxn_begin; byte \"afrz\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
- testApp(t, "itxn_begin; int acfg; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
- testApp(t, "itxn_begin; int afrz; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; byte \"acfg\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; byte \"afrz\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; int acfg; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; int afrz; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
- // alllowed since v6
- testApp(t, "itxn_begin; byte \"keyreg\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
- testApp(t, "itxn_begin; int keyreg; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
+ // allowed since v6
+ TestApp(t, "itxn_begin; byte \"keyreg\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; int keyreg; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; byte \"appl\"; itxn_field Type; itxn_submit; int 1;", ep, "insufficient balance")
+ TestApp(t, "itxn_begin; int appl; itxn_field TypeEnum; itxn_submit; int 1;", ep, "insufficient balance")
// Establish 888 as the app id, and fund it.
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
ledger.NewAccount(basics.AppIndex(888).Address(), 200000)
- testApp(t, "itxn_begin; byte \"pay\"; itxn_field Type; itxn_submit; int 1;", ep)
- testApp(t, "itxn_begin; int pay; itxn_field TypeEnum; itxn_submit; int 1;", ep)
+ TestApp(t, "itxn_begin; byte \"pay\"; itxn_field Type; itxn_submit; int 1;", ep)
+ TestApp(t, "itxn_begin; int pay; itxn_field TypeEnum; itxn_submit; int 1;", ep)
// Can't submit because we haven't finished setup, but type passes itxn_field
- testApp(t, "itxn_begin; byte \"axfer\"; itxn_field Type; int 1;", ep)
- testApp(t, "itxn_begin; int axfer; itxn_field TypeEnum; int 1;", ep)
- testApp(t, "itxn_begin; byte \"acfg\"; itxn_field Type; int 1;", ep)
- testApp(t, "itxn_begin; int acfg; itxn_field TypeEnum; int 1;", ep)
- testApp(t, "itxn_begin; byte \"afrz\"; itxn_field Type; int 1;", ep)
- testApp(t, "itxn_begin; int afrz; itxn_field TypeEnum; int 1;", ep)
+ TestApp(t, "itxn_begin; byte \"axfer\"; itxn_field Type; int 1;", ep)
+ TestApp(t, "itxn_begin; int axfer; itxn_field TypeEnum; int 1;", ep)
+ TestApp(t, "itxn_begin; byte \"acfg\"; itxn_field Type; int 1;", ep)
+ TestApp(t, "itxn_begin; int acfg; itxn_field TypeEnum; int 1;", ep)
+ TestApp(t, "itxn_begin; byte \"afrz\"; itxn_field Type; int 1;", ep)
+ TestApp(t, "itxn_begin; int afrz; itxn_field TypeEnum; int 1;", ep)
}
func TestFieldTypes(t *testing.T) {
- ep, _ := makeSampleEnv()
- testApp(t, "itxn_begin; byte \"pay\"; itxn_field Sender;", ep, "not an address")
- testApp(t, obfuscate("itxn_begin; int 7; itxn_field Receiver;"), ep, "not an address")
- testApp(t, "itxn_begin; byte \"\"; itxn_field CloseRemainderTo;", ep, "not an address")
- testApp(t, "itxn_begin; byte \"\"; itxn_field AssetSender;", ep, "not an address")
+ ep, _, _ := MakeSampleEnv()
+ TestApp(t, "itxn_begin; byte \"pay\"; itxn_field Sender;", ep, "not an address")
+ TestApp(t, Obfuscate("itxn_begin; int 7; itxn_field Receiver;"), ep, "not an address")
+ TestApp(t, "itxn_begin; byte \"\"; itxn_field CloseRemainderTo;", ep, "not an address")
+ TestApp(t, "itxn_begin; byte \"\"; itxn_field AssetSender;", ep, "not an address")
	// can't really tell if it's an address, so 32 bytes gets further
- testApp(t, "itxn_begin; byte \"01234567890123456789012345678901\"; itxn_field AssetReceiver;",
+ TestApp(t, "itxn_begin; byte \"01234567890123456789012345678901\"; itxn_field AssetReceiver;",
ep, "invalid Account reference")
// but a b32 string rep is not an account
- testApp(t, "itxn_begin; byte \"GAYTEMZUGU3DOOBZGAYTEMZUGU3DOOBZGAYTEMZUGU3DOOBZGAYZIZD42E\"; itxn_field AssetCloseTo;",
+ TestApp(t, "itxn_begin; byte \"GAYTEMZUGU3DOOBZGAYTEMZUGU3DOOBZGAYTEMZUGU3DOOBZGAYZIZD42E\"; itxn_field AssetCloseTo;",
ep, "not an address")
- testApp(t, obfuscate("itxn_begin; byte \"pay\"; itxn_field Fee;"), ep, "not a uint64")
- testApp(t, obfuscate("itxn_begin; byte 0x01; itxn_field Amount;"), ep, "not a uint64")
- testApp(t, obfuscate("itxn_begin; byte 0x01; itxn_field XferAsset;"), ep, "not a uint64")
- testApp(t, obfuscate("itxn_begin; byte 0x01; itxn_field AssetAmount;"), ep, "not a uint64")
+ TestApp(t, Obfuscate("itxn_begin; byte \"pay\"; itxn_field Fee;"), ep, "not a uint64")
+ TestApp(t, Obfuscate("itxn_begin; byte 0x01; itxn_field Amount;"), ep, "not a uint64")
+ TestApp(t, Obfuscate("itxn_begin; byte 0x01; itxn_field XferAsset;"), ep, "not a uint64")
+ TestApp(t, Obfuscate("itxn_begin; byte 0x01; itxn_field AssetAmount;"), ep, "not a uint64")
+
+}
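+
+// appAddr returns the app account (escrow) address for the given app id,
+// i.e. what "global CurrentApplicationAddress" reports while that app runs.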
+func appAddr(id int) basics.Address {
+ return basics.AppIndex(id).Address()
}
func TestAppPay(t *testing.T) {
@@ -114,25 +126,25 @@ func TestAppPay(t *testing.T) {
int 1
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- testApp(t, "txn Sender; balance; int 0; ==;", ep)
- testApp(t, "txn Sender; txn Accounts 1; int 100"+pay, ep, "unauthorized")
- testApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, ep,
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ TestApp(t, "txn Sender; balance; int 0; ==;", ep)
+ TestApp(t, "txn Sender; txn Accounts 1; int 100"+pay, ep, "unauthorized")
+ TestApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, ep,
"insufficient balance")
- ledger.NewAccount(ledger.ApplicationID().Address(), 1000000)
+ ledger.NewAccount(appAddr(888), 1000000)
- // You might expect this to fail because of min balance issue
+	// You might expect this to fail because of a min balance issue
// (receiving account only gets 100 microalgos). It does not fail at
// this level, instead, we must be certain that the existing min
// balance check in eval.transaction() properly notices and fails
// the transaction later. This fits with the model that we check
// min balances once at the end of each "top-level" transaction.
- testApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, ep)
+ TestApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, ep)
// 100 of 1000000 spent, plus MinTxnFee in our fake protocol is 1001
- testApp(t, "global CurrentApplicationAddress; balance; int 998899; ==", ep)
- testApp(t, "txn Receiver; balance; int 100; ==", ep)
+ TestApp(t, "global CurrentApplicationAddress; balance; int 998899; ==", ep)
+ TestApp(t, "txn Receiver; balance; int 100; ==", ep)
close := `
itxn_begin
@@ -141,16 +153,16 @@ func TestAppPay(t *testing.T) {
itxn_submit
int 1
`
- testApp(t, close, ep)
- testApp(t, "global CurrentApplicationAddress; balance; !", ep)
+ TestApp(t, close, ep)
+ TestApp(t, "global CurrentApplicationAddress; balance; !", ep)
// Receiver got most of the algos (except 1001 for fee)
- testApp(t, "txn Receiver; balance; int 997998; ==", ep)
+ TestApp(t, "txn Receiver; balance; int 997998; ==", ep)
}
func TestAppAssetOptIn(t *testing.T) {
- ep, ledger := makeSampleEnv()
+ ep, tx, ledger := MakeSampleEnv()
// Establish 888 as the app id, and fund it.
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
ledger.NewAccount(basics.AppIndex(888).Address(), 200000)
axfer := `
@@ -162,9 +174,9 @@ txn Sender; itxn_field AssetReceiver;
itxn_submit
int 1
`
- testApp(t, axfer, ep, "invalid Asset reference")
- ep.Txn.Txn.ForeignAssets = append(ep.Txn.Txn.ForeignAssets, 25)
- testApp(t, axfer, ep, "not opted in") // app account not opted in
+ TestApp(t, axfer, ep, "invalid Asset reference")
+ tx.ForeignAssets = append(tx.ForeignAssets, 25)
+ TestApp(t, axfer, ep, "not opted in") // app account not opted in
optin := `
itxn_begin
int axfer; itxn_field TypeEnum;
@@ -174,23 +186,23 @@ global CurrentApplicationAddress; itxn_field AssetReceiver;
itxn_submit
int 1
`
- testApp(t, optin, ep, "does not exist")
+ TestApp(t, optin, ep, "does not exist")
// Asset 25
- ledger.NewAsset(ep.Txn.Txn.Sender, 25, basics.AssetParams{
+ ledger.NewAsset(tx.Sender, 25, basics.AssetParams{
Total: 10,
UnitName: "x",
AssetName: "Cross",
})
- testApp(t, optin, ep)
+ TestApp(t, optin, ep)
- testApp(t, axfer, ep, "insufficient balance") // opted in, but balance=0
+ TestApp(t, axfer, ep, "insufficient balance") // opted in, but balance=0
// Fund the app account with the asset
ledger.NewHolding(basics.AppIndex(888).Address(), 25, 5, false)
- testApp(t, axfer, ep)
- testApp(t, axfer, ep)
- testApp(t, axfer, ep, "insufficient balance") // balance = 1, tried to move 2)
- testApp(t, "global CurrentApplicationAddress; int 25; asset_holding_get AssetBalance; assert; int 1; ==", ep)
+ TestApp(t, axfer, ep)
+ TestApp(t, axfer, ep)
+	TestApp(t, axfer, ep, "insufficient balance") // balance = 1, tried to move 2
+ TestApp(t, "global CurrentApplicationAddress; int 25; asset_holding_get AssetBalance; assert; int 1; ==", ep)
close := `
itxn_begin
@@ -202,8 +214,8 @@ txn Sender; itxn_field AssetCloseTo;
itxn_submit
int 1
`
- testApp(t, close, ep)
- testApp(t, "global CurrentApplicationAddress; int 25; asset_holding_get AssetBalance; !; assert; !", ep)
+ TestApp(t, close, ep)
+ TestApp(t, "global CurrentApplicationAddress; int 25; asset_holding_get AssetBalance; !; assert; !", ep)
}
func TestRekeyPay(t *testing.T) {
@@ -217,13 +229,13 @@ func TestRekeyPay(t *testing.T) {
itxn_submit
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- testApp(t, "txn Sender; balance; int 0; ==;", ep)
- testApp(t, "txn Sender; txn Accounts 1; int 100"+pay, ep, "unauthorized")
- ledger.NewAccount(ep.Txn.Txn.Sender, 120+ep.Proto.MinTxnFee)
- ledger.Rekey(ep.Txn.Txn.Sender, basics.AppIndex(888).Address())
- testApp(t, "txn Sender; txn Accounts 1; int 100"+pay+"; int 1", ep)
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ TestApp(t, "txn Sender; balance; int 0; ==;", ep)
+ TestApp(t, "txn Sender; txn Accounts 1; int 100"+pay, ep, "unauthorized")
+ ledger.NewAccount(tx.Sender, 120+ep.Proto.MinTxnFee)
+ ledger.Rekey(tx.Sender, basics.AppIndex(888).Address())
+ TestApp(t, "txn Sender; txn Accounts 1; int 100"+pay+"; int 1", ep)
// Note that the Sender would fail min balance check if we did it here.
// It seems proper to wait until end of txn though.
// See explanation in logicLedger's Perform()
@@ -242,15 +254,15 @@ func TestRekeyBack(t *testing.T) {
itxn_submit
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- testApp(t, "txn Sender; balance; int 0; ==;", ep)
- testApp(t, "txn Sender; txn Accounts 1; int 100"+payAndUnkey, ep, "unauthorized")
- ledger.NewAccount(ep.Txn.Txn.Sender, 120+3*ep.Proto.MinTxnFee)
- ledger.Rekey(ep.Txn.Txn.Sender, basics.AppIndex(888).Address())
- testApp(t, "txn Sender; txn Accounts 1; int 100"+payAndUnkey+"; int 1", ep)
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ TestApp(t, "txn Sender; balance; int 0; ==;", ep)
+ TestApp(t, "txn Sender; txn Accounts 1; int 100"+payAndUnkey, ep, "unauthorized")
+ ledger.NewAccount(tx.Sender, 120+3*ep.Proto.MinTxnFee)
+ ledger.Rekey(tx.Sender, basics.AppIndex(888).Address())
+ TestApp(t, "txn Sender; txn Accounts 1; int 100"+payAndUnkey+"; int 1", ep)
// now rekeyed back to original
- testApp(t, "txn Sender; txn Accounts 1; int 100"+payAndUnkey, ep, "unauthorized")
+ TestApp(t, "txn Sender; txn Accounts 1; int 100"+payAndUnkey, ep, "unauthorized")
}
func TestDefaultSender(t *testing.T) {
@@ -263,13 +275,13 @@ func TestDefaultSender(t *testing.T) {
itxn_submit
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- ep.Txn.Txn.Accounts = append(ep.Txn.Txn.Accounts, ledger.ApplicationID().Address())
- testApp(t, "txn Accounts 1; int 100"+pay, ep, "insufficient balance")
- ledger.NewAccount(ledger.ApplicationID().Address(), 1000000)
- testApp(t, "txn Accounts 1; int 100"+pay+"int 1", ep)
- testApp(t, "global CurrentApplicationAddress; balance; int 998899; ==", ep)
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ tx.Accounts = append(tx.Accounts, appAddr(888))
+ TestApp(t, "txn Accounts 1; int 100"+pay, ep, "insufficient balance")
+ ledger.NewAccount(appAddr(888), 1000000)
+ TestApp(t, "txn Accounts 1; int 100"+pay+"int 1", ep)
+ TestApp(t, "global CurrentApplicationAddress; balance; int 998899; ==", ep)
}
func TestAppAxfer(t *testing.T) {
@@ -285,34 +297,34 @@ func TestAppAxfer(t *testing.T) {
itxn_submit
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- ledger.NewAsset(ep.Txn.Txn.Receiver, 777, basics.AssetParams{}) // not in foreign-assets of sample
- ledger.NewAsset(ep.Txn.Txn.Receiver, 77, basics.AssetParams{}) // in foreign-assets of sample
- testApp(t, "txn Sender; int 777; asset_holding_get AssetBalance; assert; int 0; ==;", ep,
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAsset(tx.Receiver, 777, basics.AssetParams{}) // not in foreign-assets of sample
+ ledger.NewAsset(tx.Receiver, 77, basics.AssetParams{}) // in foreign-assets of sample
+ TestApp(t, "txn Sender; int 777; asset_holding_get AssetBalance; assert; int 0; ==;", ep,
"invalid Asset reference") // 777 not in foreign-assets
- testApp(t, "txn Sender; int 77; asset_holding_get AssetBalance; assert; int 0; ==;", ep,
+ TestApp(t, "txn Sender; int 77; asset_holding_get AssetBalance; assert; int 0; ==;", ep,
"assert failed") // because Sender not opted-in
- testApp(t, "global CurrentApplicationAddress; int 77; asset_holding_get AssetBalance; assert; int 0; ==;", ep,
+ TestApp(t, "global CurrentApplicationAddress; int 77; asset_holding_get AssetBalance; assert; int 0; ==;", ep,
"assert failed") // app account not opted in
- ledger.NewAccount(ledger.ApplicationID().Address(), 10000) // plenty for fees
- ledger.NewHolding(ledger.ApplicationID().Address(), 77, 3000, false)
- testApp(t, "global CurrentApplicationAddress; int 77; asset_holding_get AssetBalance; assert; int 3000; ==;", ep)
+ ledger.NewAccount(appAddr(888), 10000) // plenty for fees
+ ledger.NewHolding(appAddr(888), 77, 3000, false)
+ TestApp(t, "global CurrentApplicationAddress; int 77; asset_holding_get AssetBalance; assert; int 3000; ==;", ep)
- testApp(t, "txn Sender; txn Accounts 1; int 100"+axfer, ep, "unauthorized")
- testApp(t, "global CurrentApplicationAddress; txn Accounts 0; int 100"+axfer, ep,
- fmt.Sprintf("Receiver (%s) not opted in", ep.Txn.Txn.Sender)) // txn.Sender (receiver of the axfer) isn't opted in
- testApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100000"+axfer, ep,
+ TestApp(t, "txn Sender; txn Accounts 1; int 100"+axfer, ep, "unauthorized")
+ TestApp(t, "global CurrentApplicationAddress; txn Accounts 0; int 100"+axfer, ep,
+ fmt.Sprintf("Receiver (%s) not opted in", tx.Sender)) // txn.Sender (receiver of the axfer) isn't opted in
+ TestApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100000"+axfer, ep,
"insufficient balance")
// Temporarily remove from ForeignAssets to ensure App Account
// doesn't get some sort of free pass to send arbitrary assets.
- save := ep.Txn.Txn.ForeignAssets
- ep.Txn.Txn.ForeignAssets = []basics.AssetIndex{6, 10}
- testApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100000"+axfer, ep,
+ save := tx.ForeignAssets
+ tx.ForeignAssets = []basics.AssetIndex{6, 10}
+ TestApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100000"+axfer, ep,
"invalid Asset reference 77")
- ep.Txn.Txn.ForeignAssets = save
+ tx.ForeignAssets = save
noid := `
itxn_begin
@@ -323,14 +335,14 @@ func TestAppAxfer(t *testing.T) {
itxn_field TypeEnum
itxn_submit
`
- testApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+noid+"int 1", ep,
- fmt.Sprintf("Sender (%s) not opted in to 0", ledger.ApplicationID().Address()))
+ TestApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+noid+"int 1", ep,
+ fmt.Sprintf("Sender (%s) not opted in to 0", appAddr(888)))
- testApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+axfer+"int 1", ep)
+ TestApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+axfer+"int 1", ep)
// 100 of 3000 spent
- testApp(t, "global CurrentApplicationAddress; int 77; asset_holding_get AssetBalance; assert; int 2900; ==", ep)
- testApp(t, "txn Accounts 1; int 77; asset_holding_get AssetBalance; assert; int 100; ==", ep)
+ TestApp(t, "global CurrentApplicationAddress; int 77; asset_holding_get AssetBalance; assert; int 2900; ==", ep)
+ TestApp(t, "txn Accounts 1; int 77; asset_holding_get AssetBalance; assert; int 100; ==", ep)
}
func TestExtraFields(t *testing.T) {
@@ -345,11 +357,11 @@ func TestExtraFields(t *testing.T) {
itxn_submit
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- testApp(t, "txn Sender; balance; int 0; ==;", ep)
- testApp(t, "txn Sender; txn Accounts 1; int 100"+pay, ep, "unauthorized")
- testApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, ep,
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ TestApp(t, "txn Sender; balance; int 0; ==;", ep)
+ TestApp(t, "txn Sender; txn Accounts 1; int 100"+pay, ep, "unauthorized")
+ TestApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, ep,
"non-zero fields for type axfer")
}
@@ -363,14 +375,16 @@ func TestBadFieldV5(t *testing.T) {
int pay
itxn_field TypeEnum
txn Receiver
- itxn_field RekeyTo // NOT ALLOWED
+ itxn_field Sender // Will be changed to RekeyTo
itxn_submit
`
- ep, ledger := makeSampleEnvWithVersion(5)
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- testApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, ep,
- "invalid itxn_field RekeyTo")
+ ep, tx, ledger := MakeSampleEnvWithVersion(5)
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ // Assemble a good program, then change the field to a bad one
+ ops := TestProg(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, 5)
+ ops.Program[len(ops.Program)-2] = byte(RekeyTo)
+ TestAppBytes(t, ops.Program, ep, "invalid itxn_field RekeyTo")
}
func TestBadField(t *testing.T) {
@@ -383,19 +397,20 @@ func TestBadField(t *testing.T) {
int pay
itxn_field TypeEnum
txn Receiver
- itxn_field RekeyTo // ALLOWED, since v6
+ itxn_field RekeyTo // ALLOWED, since v6
int 10
- itxn_field FirstValid // NOT ALLOWED
+ itxn_field Amount // Will be changed to FirstValid
itxn_submit
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- testApp(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, ep,
- "invalid itxn_field FirstValid")
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ops := TestProg(t, "global CurrentApplicationAddress; txn Accounts 1; int 100"+pay, AssemblerMaxVersion)
+ ops.Program[len(ops.Program)-2] = byte(FirstValid)
+ TestAppBytes(t, ops.Program, ep, "invalid itxn_field FirstValid")
}
-func TestNumInner(t *testing.T) {
+func TestNumInnerShallow(t *testing.T) {
pay := `
itxn_begin
int 1
@@ -407,15 +422,68 @@ func TestNumInner(t *testing.T) {
itxn_submit
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- ledger.NewAccount(ledger.ApplicationID().Address(), 1000000)
- testApp(t, pay+";int 1", ep)
- testApp(t, pay+pay+";int 1", ep)
- testApp(t, pay+pay+pay+";int 1", ep)
- testApp(t, pay+pay+pay+pay+";int 1", ep)
+ ep, tx, ledger := MakeSampleEnv()
+ ep.Proto.EnableInnerTransactionPooling = false
+ ep.Reset()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 1000000)
+ TestApp(t, pay+";int 1", ep)
+ TestApp(t, pay+pay+";int 1", ep)
+ TestApp(t, pay+pay+pay+";int 1", ep)
+ TestApp(t, pay+pay+pay+pay+";int 1", ep)
// In the sample proto, MaxInnerTransactions = 4
- testApp(t, pay+pay+pay+pay+pay+";int 1", ep, "too many inner transactions")
+ TestApp(t, pay+pay+pay+pay+pay+";int 1", ep, "too many inner transactions")
+
+ ep, tx, ledger = MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 1000000)
+ TestApp(t, pay+";int 1", ep)
+ TestApp(t, pay+pay+";int 1", ep)
+ TestApp(t, pay+pay+pay+";int 1", ep)
+ TestApp(t, pay+pay+pay+pay+";int 1", ep)
+ // In the sample proto, MaxInnerTransactions = 4, but when pooling you get
+ // MaxTxGroupSize (here, 8) * that.
+ TestApp(t, pay+pay+pay+pay+pay+";int 1", ep)
+ TestApp(t, strings.Repeat(pay, 32)+";int 1", ep)
+ TestApp(t, strings.Repeat(pay, 33)+";int 1", ep, "too many inner transactions")
+}
+
+// TestNumInnerPooled ensures that inner call limits are pooled across app calls
+// in a group.
+func TestNumInnerPooled(t *testing.T) {
+ pay := `
+ itxn_begin
+ int 1
+ itxn_field Amount
+ txn Accounts 0
+ itxn_field Receiver
+ int pay
+ itxn_field TypeEnum
+ itxn_submit
+`
+
+ tx := txntest.Txn{
+ Type: protocol.ApplicationCallTx,
+ }.SignedTxn()
+ ledger := MakeLedger(nil)
+ ledger.NewApp(tx.Txn.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 1000000)
+ short := pay + ";int 1"
+ long := strings.Repeat(pay, 17) + ";int 1" // More than half allowed
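+	// With pooling, a group gets MaxTxGroupSize (8) * MaxInnerTransactions (4)
+	// = 32 inners, so short+long (1+17=18) fits while long+long (34) does not.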
+
+ grp := MakeSampleTxnGroup(tx)
+ TestApps(t, []string{short, ""}, grp, LogicVersion, ledger)
+ TestApps(t, []string{short, short}, grp, LogicVersion, ledger)
+ TestApps(t, []string{long, ""}, grp, LogicVersion, ledger)
+ TestApps(t, []string{short, long}, grp, LogicVersion, ledger)
+ TestApps(t, []string{long, short}, grp, LogicVersion, ledger)
+ TestApps(t, []string{long, long}, grp, LogicVersion, ledger,
+ NewExpect(1, "too many inner transactions"))
+ grp = append(grp, grp[0])
+ TestApps(t, []string{short, long, long}, grp, LogicVersion, ledger,
+ NewExpect(2, "too many inner transactions"))
+ TestApps(t, []string{long, long, long}, grp, LogicVersion, ledger,
+ NewExpect(1, "too many inner transactions"))
}
func TestAssetCreate(t *testing.T) {
@@ -436,12 +504,12 @@ func TestAssetCreate(t *testing.T) {
itxn_submit
int 1
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- testApp(t, create, ep, "insufficient balance")
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ TestApp(t, create, ep, "insufficient balance")
// Give it enough for fee. Recall that we don't check min balance at this level.
- ledger.NewAccount(ledger.ApplicationID().Address(), defaultEvalProto().MinTxnFee)
- testApp(t, create, ep)
+ ledger.NewAccount(appAddr(888), MakeTestProto().MinTxnFee)
+ TestApp(t, create, ep)
}
func TestAssetFreeze(t *testing.T) {
@@ -456,98 +524,98 @@ func TestAssetFreeze(t *testing.T) {
global CurrentApplicationAddress ; itxn_field ConfigAssetFreeze;
itxn_submit
itxn CreatedAssetID
- int 889
+ int 5000
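+// created ids in this test ledger start at 5000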
==
`
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
// Give it enough for fees. Recall that we don't check min balance at this level.
- ledger.NewAccount(ledger.ApplicationID().Address(), 12*defaultEvalProto().MinTxnFee)
- testApp(t, create, ep)
+ ledger.NewAccount(appAddr(888), 12*MakeTestProto().MinTxnFee)
+ TestApp(t, create, ep)
freeze := `
itxn_begin
int afrz ; itxn_field TypeEnum
- int 889 ; itxn_field FreezeAsset
+ int 5000 ; itxn_field FreezeAsset
txn ApplicationArgs 0; btoi ; itxn_field FreezeAssetFrozen
txn Accounts 1 ; itxn_field FreezeAssetAccount
itxn_submit
int 1
`
- testApp(t, freeze, ep, "invalid Asset reference")
- ep.Txn.Txn.ForeignAssets = []basics.AssetIndex{basics.AssetIndex(889)}
- ep.Txn.Txn.ApplicationArgs = [][]byte{{0x01}}
- testApp(t, freeze, ep, "does not hold Asset")
- ledger.NewHolding(ep.Txn.Txn.Receiver, 889, 55, false)
- testApp(t, freeze, ep)
- holding, err := ledger.AssetHolding(ep.Txn.Txn.Receiver, 889)
+ TestApp(t, freeze, ep, "invalid Asset reference")
+ tx.ForeignAssets = []basics.AssetIndex{basics.AssetIndex(5000)}
+ tx.ApplicationArgs = [][]byte{{0x01}}
+ TestApp(t, freeze, ep, "does not hold Asset")
+ ledger.NewHolding(tx.Receiver, 5000, 55, false)
+ TestApp(t, freeze, ep)
+ holding, err := ledger.AssetHolding(tx.Receiver, 5000)
require.NoError(t, err)
require.Equal(t, true, holding.Frozen)
- ep.Txn.Txn.ApplicationArgs = [][]byte{{0x00}}
- testApp(t, freeze, ep)
- holding, err = ledger.AssetHolding(ep.Txn.Txn.Receiver, 889)
+ tx.ApplicationArgs = [][]byte{{0x00}}
+ TestApp(t, freeze, ep)
+ holding, err = ledger.AssetHolding(tx.Receiver, 5000)
require.NoError(t, err)
require.Equal(t, false, holding.Frozen)
}
func TestFieldSetting(t *testing.T) {
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- ledger.NewAccount(ledger.ApplicationID().Address(), 10*defaultEvalProto().MinTxnFee)
- testApp(t, "itxn_begin; int 500; bzero; itxn_field Note; int 1", ep)
- testApp(t, "itxn_begin; int 501; bzero; itxn_field Note; int 1", ep,
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 10*MakeTestProto().MinTxnFee)
+ TestApp(t, "itxn_begin; int 500; bzero; itxn_field Note; int 1", ep)
+ TestApp(t, "itxn_begin; int 501; bzero; itxn_field Note; int 1", ep,
"Note may not exceed")
- testApp(t, "itxn_begin; int 32; bzero; itxn_field VotePK; int 1", ep)
- testApp(t, "itxn_begin; int 31; bzero; itxn_field VotePK; int 1", ep,
+ TestApp(t, "itxn_begin; int 32; bzero; itxn_field VotePK; int 1", ep)
+ TestApp(t, "itxn_begin; int 31; bzero; itxn_field VotePK; int 1", ep,
"VotePK must be 32")
- testApp(t, "itxn_begin; int 32; bzero; itxn_field SelectionPK; int 1", ep)
- testApp(t, "itxn_begin; int 33; bzero; itxn_field SelectionPK; int 1", ep,
+ TestApp(t, "itxn_begin; int 32; bzero; itxn_field SelectionPK; int 1", ep)
+ TestApp(t, "itxn_begin; int 33; bzero; itxn_field SelectionPK; int 1", ep,
"SelectionPK must be 32")
- testApp(t, "itxn_begin; int 32; bzero; itxn_field RekeyTo; int 1", ep)
- testApp(t, "itxn_begin; int 31; bzero; itxn_field RekeyTo; int 1", ep,
+ TestApp(t, "itxn_begin; int 32; bzero; itxn_field RekeyTo; int 1", ep)
+ TestApp(t, "itxn_begin; int 31; bzero; itxn_field RekeyTo; int 1", ep,
"not an address")
- testApp(t, "itxn_begin; int 6; bzero; itxn_field ConfigAssetUnitName; int 1", ep)
- testApp(t, "itxn_begin; int 7; bzero; itxn_field ConfigAssetUnitName; int 1", ep,
+ TestApp(t, "itxn_begin; int 6; bzero; itxn_field ConfigAssetUnitName; int 1", ep)
+ TestApp(t, "itxn_begin; int 7; bzero; itxn_field ConfigAssetUnitName; int 1", ep,
"value is too long")
- testApp(t, "itxn_begin; int 12; bzero; itxn_field ConfigAssetName; int 1", ep)
- testApp(t, "itxn_begin; int 13; bzero; itxn_field ConfigAssetName; int 1", ep,
+ TestApp(t, "itxn_begin; int 12; bzero; itxn_field ConfigAssetName; int 1", ep)
+ TestApp(t, "itxn_begin; int 13; bzero; itxn_field ConfigAssetName; int 1", ep,
"value is too long")
}
func TestInnerGroup(t *testing.T) {
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
// Need both fees and both payments
- ledger.NewAccount(ledger.ApplicationID().Address(), 999+2*defaultEvalProto().MinTxnFee)
+ ledger.NewAccount(appAddr(888), 999+2*MakeTestProto().MinTxnFee)
pay := `
int pay; itxn_field TypeEnum;
int 500; itxn_field Amount;
txn Sender; itxn_field Receiver;
`
- testApp(t, "itxn_begin"+pay+"itxn_next"+pay+"itxn_submit; int 1", ep,
+ TestApp(t, "itxn_begin"+pay+"itxn_next"+pay+"itxn_submit; int 1", ep,
"insufficient balance")
// NewAccount overwrites the existing balance
- ledger.NewAccount(ledger.ApplicationID().Address(), 1000+2*defaultEvalProto().MinTxnFee)
- testApp(t, "itxn_begin"+pay+"itxn_next"+pay+"itxn_submit; int 1", ep)
+ ledger.NewAccount(appAddr(888), 1000+2*MakeTestProto().MinTxnFee)
+ TestApp(t, "itxn_begin"+pay+"itxn_next"+pay+"itxn_submit; int 1", ep)
}
func TestInnerFeePooling(t *testing.T) {
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- ledger.NewAccount(ledger.ApplicationID().Address(), 50_000)
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
pay := `
int pay; itxn_field TypeEnum;
int 500; itxn_field Amount;
txn Sender; itxn_field Receiver;
`
// Force the first fee to 3, but the second will default to 2*fee-3 = 2002-3
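+	// (MinTxnFee in the test proto is 1001, so the pooled pair needs 2002;
+	// paying 3 up front leaves 2002-3 = 1999 for the second.)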
- testApp(t, "itxn_begin"+
+ TestApp(t, "itxn_begin"+
pay+
"int 3; itxn_field Fee;"+
"itxn_next"+
@@ -555,7 +623,7 @@ txn Sender; itxn_field Receiver;
"itxn_submit; itxn Fee; int 1999; ==", ep)
// Same first, but force the second too low
- testApp(t, "itxn_begin"+
+ TestApp(t, "itxn_begin"+
pay+
"int 3; itxn_field Fee;"+
"itxn_next"+
@@ -564,7 +632,7 @@ txn Sender; itxn_field Receiver;
"itxn_submit; int 1", ep, "fee too small")
// Overpay in first itxn, the second will default to less
- testApp(t, "itxn_begin"+
+ TestApp(t, "itxn_begin"+
pay+
"int 2000; itxn_field Fee;"+
"itxn_next"+
@@ -572,7 +640,7 @@ txn Sender; itxn_field Receiver;
"itxn_submit; itxn Fee; int 2; ==", ep)
// Same first, but force the second too low
- testApp(t, "itxn_begin"+
+ TestApp(t, "itxn_begin"+
pay+
"int 2000; itxn_field Fee;"+
"itxn_next"+
@@ -580,3 +648,958 @@ txn Sender; itxn_field Receiver;
"int 1; itxn_field Fee;"+
"itxn_submit; itxn Fee; int 1", ep, "fee too small")
}
+
+// TestApplCreation only determines what appl transactions can be
+// constructed, not what can be submitted, so it tests which "bad" fields
+// cause immediate failures.
+func TestApplCreation(t *testing.T) {
+ ep, tx, _ := MakeSampleEnv()
+
+ p := "itxn_begin;"
+ s := "; int 1"
+
+ TestApp(t, p+"int 31; itxn_field ApplicationID"+s, ep,
+ "invalid App reference")
+ tx.ForeignApps = append(tx.ForeignApps, 31)
+ TestApp(t, p+"int 31; itxn_field ApplicationID"+s, ep)
+
+ TestApp(t, p+"int 0; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int 1; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int 2; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int 3; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int 4; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int 5; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int 6; itxn_field OnCompletion"+s, ep, "6 is larger than max=5")
+ TestApp(t, p+"int NoOp; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int OptIn; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int CloseOut; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int ClearState; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int UpdateApplication; itxn_field OnCompletion"+s, ep)
+ TestApp(t, p+"int DeleteApplication; itxn_field OnCompletion"+s, ep)
+
+ TestApp(t, p+"int 800; bzero; itxn_field ApplicationArgs"+s, ep)
+ TestApp(t, p+"int 801; bzero; itxn_field ApplicationArgs", ep,
+ "length too long")
+ TestApp(t, p+"int 401; bzero; dup; itxn_field ApplicationArgs; itxn_field ApplicationArgs", ep,
+ "length too long")
+
+ TestApp(t, p+strings.Repeat("byte 0x11; itxn_field ApplicationArgs;", 12)+s, ep)
+ TestApp(t, p+strings.Repeat("byte 0x11; itxn_field ApplicationArgs;", 13)+s, ep,
+ "too many application args")
+
+ TestApp(t, p+strings.Repeat("int 32; bzero; itxn_field Accounts;", 3)+s, ep,
+ "invalid Account reference")
+ tx.Accounts = append(tx.Accounts, basics.Address{})
+ TestApp(t, fmt.Sprintf(p+"%s"+s,
+ strings.Repeat("int 32; bzero; itxn_field Accounts;", 3)), ep)
+ TestApp(t, fmt.Sprintf(p+"%s"+s,
+ strings.Repeat("int 32; bzero; itxn_field Accounts;", 4)), ep,
+ "too many foreign accounts")
+
+ TestApp(t, p+strings.Repeat("int 621; itxn_field Applications;", 5)+s, ep,
+ "invalid App reference")
+ tx.ForeignApps = append(tx.ForeignApps, basics.AppIndex(621))
+ TestApp(t, p+strings.Repeat("int 621; itxn_field Applications;", 5)+s, ep)
+ TestApp(t, p+strings.Repeat("int 621; itxn_field Applications;", 6)+s, ep,
+ "too many foreign apps")
+
+ TestApp(t, p+strings.Repeat("int 621; itxn_field Assets;", 6)+s, ep,
+ "invalid Asset reference")
+ tx.ForeignAssets = append(tx.ForeignAssets, basics.AssetIndex(621))
+ TestApp(t, p+strings.Repeat("int 621; itxn_field Assets;", 6)+s, ep)
+ TestApp(t, p+strings.Repeat("int 621; itxn_field Assets;", 7)+s, ep,
+ "too many foreign assets")
+
+ TestApp(t, p+"int 2700; bzero; itxn_field ApprovalProgram"+s, ep)
+ TestApp(t, p+"int 2701; bzero; itxn_field ApprovalProgram"+s, ep,
+ "may not exceed 2700")
+ TestApp(t, p+"int 2700; bzero; itxn_field ClearStateProgram"+s, ep)
+ TestApp(t, p+"int 2701; bzero; itxn_field ClearStateProgram"+s, ep,
+ "may not exceed 2700")
+
+ TestApp(t, p+"int 30; itxn_field GlobalNumUint"+s, ep)
+ TestApp(t, p+"int 31; itxn_field GlobalNumUint"+s, ep, "31 is larger than max=30")
+ TestApp(t, p+"int 30; itxn_field GlobalNumByteSlice"+s, ep)
+ TestApp(t, p+"int 31; itxn_field GlobalNumByteSlice"+s, ep, "31 is larger than max=30")
+ TestApp(t, p+"int 20; itxn_field GlobalNumUint; int 11; itxn_field GlobalNumByteSlice"+s, ep)
+
+ TestApp(t, p+"int 13; itxn_field LocalNumUint"+s, ep)
+ TestApp(t, p+"int 14; itxn_field LocalNumUint"+s, ep, "14 is larger than max=13")
+ TestApp(t, p+"int 13; itxn_field LocalNumByteSlice"+s, ep)
+ TestApp(t, p+"int 14; itxn_field LocalNumByteSlice"+s, ep, "14 is larger than max=13")
+
+ TestApp(t, p+"int 2; itxn_field ExtraProgramPages"+s, ep)
+ TestApp(t, p+"int 3; itxn_field ExtraProgramPages"+s, ep, "3 is larger than max=2")
+}
+
+// TestApplSubmission tests the checks on appl transactions that are illegal
+// in form only, i.e. cases where the interaction between two different
+// fields causes the error. These are not exhaustive, but they certainly
+// demonstrate that WellFormed is getting a crack at the txn.
+func TestApplSubmission(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ // Since the fee is moved first, fund the app
+ ledger.NewAccount(appAddr(888), 50_000)
+
+ ops := TestProg(t, "int 1", AssemblerMaxVersion)
+ approve := hex.EncodeToString(ops.Program)
+ a := fmt.Sprintf("byte 0x%s; itxn_field ApprovalProgram;", approve)
+
+ p := "itxn_begin; int appl; itxn_field TypeEnum;"
+ s := ";itxn_submit; int 1"
+ TestApp(t, p+a+s, ep)
+
+ // All zeros is v0, so we get a complaint, but that means lengths were ok.
+ TestApp(t, p+a+`int 600; bzero; itxn_field ApprovalProgram;
+ int 600; bzero; itxn_field ClearStateProgram;`+s, ep,
+ "program version must be")
+
+ TestApp(t, p+`int 601; bzero; itxn_field ApprovalProgram;
+ int 600; bzero; itxn_field ClearStateProgram;`+s, ep, "too long")
+
+ // WellFormed does the math based on the supplied ExtraProgramPages
+ TestApp(t, p+a+`int 1; itxn_field ExtraProgramPages
+ int 1200; bzero; itxn_field ApprovalProgram;
+ int 1200; bzero; itxn_field ClearStateProgram;`+s, ep,
+ "program version must be")
+ TestApp(t, p+`int 1; itxn_field ExtraProgramPages
+ int 1200; bzero; itxn_field ApprovalProgram;
+ int 1201; bzero; itxn_field ClearStateProgram;`+s, ep, "too long")
+
+ // Can't set epp when app id is given
+ tx.ForeignApps = append(tx.ForeignApps, basics.AppIndex(7))
+ TestApp(t, p+`int 1; itxn_field ExtraProgramPages;
+ int 7; itxn_field ApplicationID`+s, ep, "immutable")
+
+ TestApp(t, p+"int 20; itxn_field GlobalNumUint; int 11; itxn_field GlobalNumByteSlice"+s,
+ ep, "too large")
+ TestApp(t, p+"int 7; itxn_field LocalNumUint; int 7; itxn_field LocalNumByteSlice"+s,
+ ep, "too large")
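+	// ("too large": the combined schema apparently exceeds the allowed totals
+	// here, 20+11 > 30 global entries and 7+7 > 13 local entries.)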
+}
+
+func TestInnerApplCreate(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+
+ ops := TestProg(t, "int 1", AssemblerMaxVersion)
+ approve := "byte 0x" + hex.EncodeToString(ops.Program)
+
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+`+approve+`; itxn_field ApprovalProgram
+`+approve+`; itxn_field ClearStateProgram
+int 1; itxn_field GlobalNumUint
+int 2; itxn_field LocalNumByteSlice
+int 3; itxn_field LocalNumUint
+itxn_submit
+int 1
+`, ep)
+
+ TestApp(t, `
+int 5000; app_params_get AppGlobalNumByteSlice; assert; int 0; ==; assert
+`, ep, "invalid App reference")
+
+ call := `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 5000; itxn_field ApplicationID
+itxn_submit
+int 1
+`
+ // Can't call it either
+ TestApp(t, call, ep, "invalid App reference")
+
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(5000)}
+ TestApp(t, `
+int 5000; app_params_get AppGlobalNumByteSlice; assert; int 0; ==; assert
+int 5000; app_params_get AppGlobalNumUint; assert; int 1; ==; assert
+int 5000; app_params_get AppLocalNumByteSlice; assert; int 2; ==; assert
+int 5000; app_params_get AppLocalNumUint; assert; int 3; ==; assert
+int 1
+`, ep)
+
+ // Call it (default OnComplete is NoOp)
+ TestApp(t, call, ep)
+
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int DeleteApplication; itxn_field OnCompletion
+txn Applications 1; itxn_field ApplicationID
+itxn_submit
+int 1
+`, ep)
+
+ // App is gone
+ TestApp(t, `
+int 5000; app_params_get AppGlobalNumByteSlice; !; assert; !; assert; int 1
+`, ep)
+
+ // Can't call it either
+ TestApp(t, call, ep, "No application")
+
+}
+
+func TestCreateOldAppFails(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+
+ ops := TestProg(t, "int 1", InnerAppsEnabledVersion-1)
+ approve := "byte 0x" + hex.EncodeToString(ops.Program)
+
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+`+approve+`; itxn_field ApprovalProgram
+`+approve+`; itxn_field ClearStateProgram
+int 1; itxn_field GlobalNumUint
+int 2; itxn_field LocalNumByteSlice
+int 3; itxn_field LocalNumUint
+itxn_submit
+int 1
+`, ep, "program version must be >=")
+}
+
+func TestSelfReentrancy(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 888; itxn_field ApplicationID
+itxn_submit
+int 1
+`, ep, "attempt to self-call")
+}
+
+func TestIndirectReentrancy(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ call888 := TestProg(t, `itxn_begin
+int appl; itxn_field TypeEnum
+int 888; itxn_field ApplicationID
+itxn_submit
+int 1
+`, AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: call888.Program,
+ })
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+int 888; itxn_field Applications
+itxn_submit
+int 1
+`, ep, "attempt to re-enter 888")
+}
+
+// TestInnerAppID ensures that an inner app properly sees its AppID. This seems
+// needlessly picky to test, but the appID used to be stored outside the cx.
+func TestInnerAppID(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ logID := TestProg(t, `global CurrentApplicationID; itob; log; int 1`, AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: logID.Program,
+ })
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit
+itxn Logs 0
+btoi
+int 222
+==
+`, ep)
+}
+
+// TestInnerBudgetIncrement ensures that an app can make a (nearly) empty inner
+// app call in order to get 700 extra opcode budget. Unfortunately, it costs a
+// bit to create the call, and the app itself consumes 1, so it ends up being
+// about 690 (see next test).
+func TestInnerBudgetIncrement(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ gasup := TestProg(t, "pushint 1", AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: gasup.Program,
+ })
+
+ waste := `global CurrentApplicationAddress; keccak256; pop;`
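+	// each waste costs roughly 132 (keccak256 alone is 130), so 5 fit the
+	// base 700 budget but 6 do not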
+ buy := `itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+`
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+ TestApp(t, strings.Repeat(waste, 5)+"int 1", ep)
+ TestApp(t, strings.Repeat(waste, 6)+"int 1", ep, "dynamic cost budget exceeded")
+ TestApp(t, strings.Repeat(waste, 6)+buy+"int 1", ep, "dynamic cost budget exceeded")
+ TestApp(t, buy+strings.Repeat(waste, 6)+"int 1", ep)
+ TestApp(t, buy+strings.Repeat(waste, 10)+"int 1", ep)
+ TestApp(t, buy+strings.Repeat(waste, 12)+"int 1", ep, "dynamic cost budget exceeded")
+ TestApp(t, buy+strings.Repeat(waste, 12)+"int 1", ep, "dynamic cost budget exceeded")
+ TestApp(t, buy+buy+strings.Repeat(waste, 12)+"int 1", ep)
+}
+
+func TestIncrementCheck(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ gasup := TestProg(t, "pushint 1", AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: gasup.Program,
+ })
+
+ source := `
+// 698, not 699, because intcblock happens first
+global OpcodeBudget; int 698; ==; assert
+// "buy" more
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+global OpcodeBudget; int 1387; ==; assert
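+// 1387 = 698 - 10 (the ten unit-cost opcodes since the last check, including
+//        the global above) + 700 (bought by the inner call) - 1 (its pushint)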
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+global OpcodeBudget; int 2076; ==; assert
+int 1
+`
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+ TestApp(t, source, ep)
+}
+
+// TestInnerTxIDs confirms that TxIDs are available and different
+func TestInnerTxIDs(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ txid := TestProg(t, "txn TxID; log; int 1", AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: txid.Program,
+ })
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+itxn Logs 0
+
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+itxn Logs 0
+
+!=
+`, ep)
+}
+
+// TestInnerGroupIDs confirms that GroupIDs are unset on size-one inner
+// groups, but set and unique on non-singletons.
+func TestInnerGroupIDs(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ gid := TestProg(t, "global GroupID; log; int 1", AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: gid.Program,
+ })
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+
+ // A single txn gets 0 group id
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+itxn Logs 0
+global ZeroAddress
+==
+`, ep)
+
+	// A double call gets something else
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+itxn Logs 0
+global ZeroAddress
+!=
+`, ep)
+
+ // The "something else" is unique, despite two identical groups
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+itxn Logs 0
+
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+itxn Logs 0
+
+!=
+`, ep)
+}
+
+// TestGtixn confirms access to itxn groups
+func TestGtixn(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ two := TestProg(t, "byte 0x22; log; int 1", AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: two.Program,
+ })
+ three := TestProg(t, "byte 0x33; log; int 1", AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 333, basics.AppParams{
+ ApprovalProgram: three.Program,
+ })
+ four := TestProg(t, "byte 0x44; log; int 1", AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 444, basics.AppParams{
+ ApprovalProgram: four.Program,
+ })
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222), basics.AppIndex(333), basics.AppIndex(444)}
+
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_next
+int appl; itxn_field TypeEnum
+int 333; itxn_field ApplicationID
+itxn_submit;
+gitxn 0 Logs 0
+byte 0x22
+==
+assert
+
+gitxna 1 Logs 0
+byte 0x33
+==
+assert
+
+itxn_begin
+int appl; itxn_field TypeEnum
+int 444; itxn_field ApplicationID
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit;
+
+gitxn 0 Logs 0
+byte 0x44
+==
+assert
+
+gitxn 1 Logs 0
+byte 0x22
+==
+assert
+
+int 1
+`, ep)
+
+ // Confirm that two singletons don't get treated as a group
+ TestApp(t, `
+itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit
+
+itxn_begin
+int appl; itxn_field TypeEnum
+int 333; itxn_field ApplicationID
+itxn_submit
+gitxn 0 Logs 0
+byte 0x33
+==
+assert
+int 1
+`, ep)
+}
+
+// TestGtxnLog confirms that gtxn can now access previous txn's Logs.
+func TestGtxnLog(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ two := TestProg(t, "byte 0x22; log; int 1", AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: two.Program,
+ })
+ three := TestProg(t, "gtxn 0 NumLogs; int 1; ==; assert; gtxna 0 Logs 0; byte 0x22; ==", AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 333, basics.AppParams{
+ ApprovalProgram: three.Program,
+ })
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222), basics.AppIndex(333)}
+
+ TestApp(t, `itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_next
+int appl; itxn_field TypeEnum
+int 333; itxn_field ApplicationID
+itxn_submit
+int 1
+`, ep)
+}
+
+// TestGtxnApps confirms that gtxn can now access previous txn's created app id.
+func TestGtxnApps(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ appcheck := TestProg(t, `
+gtxn 0 CreatedApplicationID; itob; log;
+gtxn 1 CreatedApplicationID; itob; log;
+int 1
+`, AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: appcheck.Program,
+ })
+
+ ops := TestProg(t, "int 1", AssemblerMaxVersion)
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+ TestApp(t, `itxn_begin
+int appl; itxn_field TypeEnum
+ `+fmt.Sprintf("byte 0x%s; itxn_field ApprovalProgram;", hex.EncodeToString(ops.Program))+`
+itxn_next
+int appl; itxn_field TypeEnum
+ `+fmt.Sprintf("byte 0x%s; itxn_field ApprovalProgram;", hex.EncodeToString(ops.Program))+`
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit
+itxn Logs 0
+btoi
+int 5000
+==
+assert
+gitxn 2 Logs 1
+btoi
+int 5001
+==
+`, ep)
+}
+
+// TestGtxnAsa confirms that gtxn can now access previous txn's created asa id.
+func TestGtxnAsa(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ appcheck := TestProg(t, `
+gtxn 0 CreatedAssetID; itob; log;
+gtxn 1 CreatedAssetID; itob; log;
+int 1
+`, AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: appcheck.Program,
+ })
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+ TestApp(t, `itxn_begin
+int acfg; itxn_field TypeEnum
+itxn_next
+int acfg; itxn_field TypeEnum
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit
+itxn Logs 0
+btoi
+int 5000
+==
+assert
+gitxn 2 Logs 1
+btoi
+int 5001
+==
+`, ep)
+}
+
+// TestCallerGlobals checks that a called app can see its caller.
+func TestCallerGlobals(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ globals := TestProg(t, fmt.Sprintf(`
+global CallerApplicationID
+int 888
+==
+global CallerApplicationAddress
+addr %s
+==
+&&
+`, basics.AppIndex(888).Address()), AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: globals.Program,
+ })
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+ TestApp(t, `itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit
+int 1
+`, ep)
+}
+
+// TestNumInnerDeep ensures that inner call limits apply to inner calls of inner
+// transactions.
+func TestNumInnerDeep(t *testing.T) {
+ pay := `
+ itxn_begin
+ int 1
+ itxn_field Amount
+ txn Accounts 0
+ itxn_field Receiver
+ int pay
+ itxn_field TypeEnum
+ itxn_submit
+`
+
+ tx := txntest.Txn{
+ Type: protocol.ApplicationCallTx,
+ ApplicationID: 888,
+ ForeignApps: []basics.AppIndex{basics.AppIndex(222)},
+ }.SignedTxnWithAD()
+ require.Equal(t, 888, int(tx.Txn.ApplicationID))
+ ledger := MakeLedger(nil)
+
+ pay3 := TestProg(t, pay+pay+pay+"int 1;", AssemblerMaxVersion).Program
+ ledger.NewApp(tx.Txn.Receiver, 222, basics.AppParams{
+ ApprovalProgram: pay3,
+ })
+
+ ledger.NewApp(tx.Txn.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 1_000_000)
+
+ callpay3 := `itxn_begin
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+itxn_submit
+`
+ txg := []transactions.SignedTxnWithAD{tx}
+ ep := NewEvalParams(txg, MakeTestProto(), &transactions.SpecialAddresses{})
+ ep.Ledger = ledger
+ TestApp(t, callpay3+"int 1", ep, "insufficient balance") // inner contract needs money
+
+ ledger.NewAccount(appAddr(222), 1_000_000)
+ TestApp(t, callpay3+"int 1", ep)
+ // Each use of callpay3 is 4 inners total, so 8 is ok. (32 allowed in test ep)
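+	// (each callpay3 is 1 inner appl plus the called app's 3 inner pays; 9*4 = 36 > 32)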
+ TestApp(t, strings.Repeat(callpay3, 8)+"int 1", ep)
+ TestApp(t, strings.Repeat(callpay3, 9)+"int 1", ep, "too many inner transactions")
+}
+
+// TestCreateAndUse checks that an ASA can be created in an inner app, and then
+// used. This was not allowed until v6, because of the strict adherence to the
+// foreign-arrays rules.
+func TestCreateAndUse(t *testing.T) {
+ axfer := `
+ itxn_begin
+ int acfg; itxn_field TypeEnum
+ int 10; itxn_field ConfigAssetTotal
+ byte "Gold"; itxn_field ConfigAssetName
+ itxn_submit
+
+ itxn_begin
+ int axfer; itxn_field TypeEnum
+ itxn CreatedAssetID; itxn_field XferAsset
+ txn Accounts 0; itxn_field AssetReceiver
+ itxn_submit
+
+ int 1
+`
+
+ // First testing use in axfer
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 4*MakeTestProto().MinTxnFee)
+ TestApp(t, axfer, ep)
+
+ ep.Proto = MakeTestProtoV(CreatedResourcesVersion - 1)
+ TestApp(t, axfer, ep, "invalid Asset reference")
+
+ balance := `
+ itxn_begin
+ int acfg; itxn_field TypeEnum
+ int 10; itxn_field ConfigAssetTotal
+ byte "Gold"; itxn_field ConfigAssetName
+ itxn_submit
+
+ // txn Sender is not opted-in, as it's the app account that made the asset
+ // At some point, we should short-circuit so this does not go to disk.
+ txn Sender
+ itxn CreatedAssetID
+ asset_holding_get AssetBalance
+ int 0
+ ==
+ assert
+ int 0
+ ==
+ assert
+
+ // App account owns all the newly made gold
+ global CurrentApplicationAddress
+ itxn CreatedAssetID
+ asset_holding_get AssetBalance
+ assert
+ int 10
+ ==
+ assert
+
+ int 1
+`
+
+ // Now test use in asset balance opcode
+ ep, tx, ledger = MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 4*MakeTestProto().MinTxnFee)
+ TestApp(t, balance, ep)
+
+ ep.Proto = MakeTestProtoV(CreatedResourcesVersion - 1)
+ TestApp(t, balance, ep, "invalid Asset reference")
+
+ appcall := `
+ itxn_begin
+ int acfg; itxn_field TypeEnum
+ int 10; itxn_field ConfigAssetTotal
+ byte "Gold"; itxn_field ConfigAssetName
+ itxn_submit
+
+ itxn_begin
+ int appl; itxn_field TypeEnum
+ int 888; itxn_field ApplicationID
+ itxn CreatedAssetID; itxn_field Assets
+ itxn_submit
+
+ int 1
+`
+
+ // Now as ForeignAsset
+ ep, tx, ledger = MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 4*MakeTestProto().MinTxnFee)
+ // It gets passed the Assets setting
+ TestApp(t, appcall, ep, "attempt to self-call")
+
+	// The appcall isn't allowed pre-CreatedResourcesVersion, because that
+	// same version is what allowed inner app calls.
+ // ep.Proto = MakeTestProtoV(CreatedResourcesVersion - 1)
+ // TestApp(t, appcall, ep, "invalid Asset reference")
+}
+
+// main wraps up some TEAL source in a header and footer so that it is
+// an app that does nothing at create time, but otherwise runs source,
+// then approves, if the source avoids panicking and leaves the stack
+// empty.
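+//
+// For example, main("int 2; pop") produces (modulo whitespace):
+//
+//	txn ApplicationID
+//	bz end
+//	int 2; pop
+//	end: int 1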
+func main(source string) string {
+ return fmt.Sprintf(`txn ApplicationID
+ bz end
+ %s
+ end: int 1`, source)
+}
+
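+// hexProgram assembles source and returns it as "0x<hex>", ready to splice
+// into TEAL as, e.g., "byte " + hexProgram(t, src) + "; itxn_field ApprovalProgram".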
+func hexProgram(t *testing.T, source string) string {
+ return "0x" + hex.EncodeToString(TestProg(t, source, AssemblerMaxVersion).Program)
+}
+
+// TestCreateUseApp checks that an app can be created in an inner txn, and then
+// the address for it can be looked up.
+func TestCreateUseApp(t *testing.T) {
+ pay5back := main(`
+itxn_begin
+int pay; itxn_field TypeEnum
+txn Sender; itxn_field Receiver
+int 5; itxn_field Amount
+itxn_submit
+int 1
+`)
+
+ createAndUse := `
+ itxn_begin
+ int appl; itxn_field TypeEnum
+ byte ` + hexProgram(t, pay5back) + `; itxn_field ApprovalProgram;
+ itxn_submit
+
+ itxn CreatedApplicationID; app_params_get AppAddress; assert
+ addr ` + appAddr(5000).String() + `
+ ==
+`
+
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 1*MakeTestProto().MinTxnFee)
+ TestApp(t, createAndUse, ep)
+	// Again, we can't test whether this (properly) fails in a previous
+	// version, because we can't even create apps this way in previous versions.
+}
+
+// TestCreateAndPay checks that an app can be created in an inner app, and then
+// a pay can be done to the app's account. This was not allowed until v6,
+// because of the strict adherence to the foreign-accounts rules.
+func TestCreateAndPay(t *testing.T) {
+ pay5back := main(`
+itxn_begin
+int pay; itxn_field TypeEnum
+txn Sender; itxn_field Receiver
+int 5; itxn_field Amount
+itxn_submit
+int 1
+`)
+
+ createAndPay := `
+ itxn_begin
+ int appl; itxn_field TypeEnum
+ ` + fmt.Sprintf("byte %s; itxn_field ApprovalProgram;", hexProgram(t, pay5back)) + `
+ itxn_submit
+
+ itxn_begin
+ int pay; itxn_field TypeEnum
+ itxn CreatedApplicationID; app_params_get AppAddress; assert; itxn_field Receiver
+ int 10; itxn_field Amount
+ itxn_submit
+
+ int 1
+`
+
+ ep, tx, ledger := MakeSampleEnv()
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 10*MakeTestProto().MinTxnFee)
+ TestApp(t, createAndPay, ep)
+
+	// This test is impossible because CreatedResourcesVersion is also the
+	// version in which inner txns could first make apps.
+ // ep.Proto = MakeTestProtoV(CreatedResourcesVersion - 1)
+ // TestApp(t, createAndPay, ep, "invalid Address reference")
+}
+
+// TestInnerGaid ensures there's no confusion over the tracking of ids
+// across multiple inner transaction groups
+func TestInnerGaid(t *testing.T) {
+ ep, tx, ledger := MakeSampleEnv()
+ ep.Proto.MaxInnerTransactions = 100
+ // App to log the aid of slot[apparg[0]]
+ logGaid := TestProg(t, `txn ApplicationArgs 0; btoi; gaids; itob; log; int 1`, AssemblerMaxVersion)
+ ledger.NewApp(tx.Receiver, 222, basics.AppParams{
+ ApprovalProgram: logGaid.Program,
+ })
+
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ ledger.NewAccount(appAddr(888), 50_000)
+ tx.ForeignApps = []basics.AppIndex{basics.AppIndex(222)}
+ TestApp(t, `itxn_begin
+int acfg; itxn_field TypeEnum
+itxn_next
+int pay; itxn_field TypeEnum
+txn Sender; itxn_field Receiver
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+int 0; itob; itxn_field ApplicationArgs
+itxn_submit
+itxn Logs 0
+btoi
+int 5000
+==
+assert
+
+// Swap the pay and acfg, ensure gaid 1 works instead
+itxn_begin
+int pay; itxn_field TypeEnum
+txn Sender; itxn_field Receiver
+itxn_next
+int acfg; itxn_field TypeEnum
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+int 1; itob; itxn_field ApplicationArgs
+itxn_submit
+itxn Logs 0
+btoi
+int 5001
+==
+assert
+
+
+int 1
+`, ep)
+
+ // Nearly identical, but ensures that gaid 0 FAILS in the second group
+ TestApp(t, `itxn_begin
+int acfg; itxn_field TypeEnum
+itxn_next
+int pay; itxn_field TypeEnum
+txn Sender; itxn_field Receiver
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+int 0; itob; itxn_field ApplicationArgs
+itxn_submit
+itxn Logs 0
+btoi
+int 5000
+==
+assert
+
+// Swap the pay and acfg again, but still ask for gaid 0, which now refers to the pay
+itxn_begin
+int pay; itxn_field TypeEnum
+txn Sender; itxn_field Receiver
+itxn_next
+int acfg; itxn_field TypeEnum
+itxn_next
+int appl; itxn_field TypeEnum
+int 222; itxn_field ApplicationID
+int 0; itob; itxn_field ApplicationArgs
+itxn_submit
+itxn Logs 0
+btoi
+int 5001
+==
+assert
+
+
+int 1
+`, ep, "assert failed")
+
+}
diff --git a/data/transactions/logic/evalCrypto_test.go b/data/transactions/logic/evalCrypto_test.go
index 4f6a0b340..e0cb98bac 100644
--- a/data/transactions/logic/evalCrypto_test.go
+++ b/data/transactions/logic/evalCrypto_test.go
@@ -24,7 +24,6 @@ import (
"encoding/hex"
"fmt"
"math/big"
- "strings"
"testing"
"github.com/stretchr/testify/require"
@@ -89,11 +88,10 @@ func TestEd25519verify(t *testing.T) {
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
- ops, err := AssembleStringWithVersion(fmt.Sprintf(`arg 0
+ ops := testProg(t, fmt.Sprintf(`arg 0
arg 1
addr %s
ed25519verify`, pkStr), v)
- require.NoError(t, err)
sig := c.Sign(Msg{
ProgramHash: crypto.HashObj(Program(ops.Program)),
Data: data[:],
@@ -101,32 +99,18 @@ ed25519verify`, pkStr), v)
var txn transactions.SignedTxn
txn.Lsig.Logic = ops.Program
txn.Lsig.Args = [][]byte{data[:], sig[:]}
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, &txn))
- if !pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.True(t, pass)
- require.NoError(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn))
// short sig will fail
txn.Lsig.Args[1] = sig[1:]
- pass, err = Eval(ops.Program, defaultEvalParams(nil, &txn))
- require.False(t, pass)
- require.Error(t, err)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn), "invalid signature")
// flip a bit and it should not pass
msg1 := "52fdfc072182654f163f5f0f9a621d729566c74d0aa413bf009c9800418c19cd"
data1, err := hex.DecodeString(msg1)
require.NoError(t, err)
txn.Lsig.Args = [][]byte{data1, sig[:]}
- sb1 := strings.Builder{}
- pass1, err := Eval(ops.Program, defaultEvalParams(&sb1, &txn))
- require.False(t, pass1)
- require.NoError(t, err)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn), "REJECT")
})
}
}
@@ -334,7 +318,7 @@ ecdsa_verify Secp256k1`, hex.EncodeToString(r), hex.EncodeToString(s), hex.Encod
ops := testProg(t, source, 5)
var txn transactions.SignedTxn
txn.Lsig.Logic = ops.Program
- pass, err := Eval(ops.Program, defaultEvalParamsWithVersion(nil, &txn, 5))
+ pass, err := EvalSignature(0, defaultEvalParamsWithVersion(&txn, 5))
require.NoError(t, err)
require.True(t, pass)
}
@@ -427,12 +411,11 @@ ed25519verify`, pkStr), AssemblerMaxVersion)
var txn transactions.SignedTxn
txn.Lsig.Logic = programs[i]
txn.Lsig.Args = [][]byte{data[i][:], signatures[i][:]}
- sb := strings.Builder{}
- ep := defaultEvalParams(&sb, &txn)
- pass, err := Eval(programs[i], ep)
+ ep := defaultEvalParams(&txn)
+ pass, err := EvalSignature(0, ep)
if !pass {
b.Log(hex.EncodeToString(programs[i]))
- b.Log(sb.String())
+ b.Log(ep.Trace.String())
}
if err != nil {
require.NoError(b, err)
@@ -489,12 +472,11 @@ func benchmarkEcdsa(b *testing.B, source string) {
var txn transactions.SignedTxn
txn.Lsig.Logic = data[i].programs
txn.Lsig.Args = [][]byte{data[i].msg[:], data[i].r, data[i].s, data[i].x, data[i].y, data[i].pk, {uint8(data[i].v)}}
- sb := strings.Builder{}
- ep := defaultEvalParams(&sb, &txn)
- pass, err := Eval(data[i].programs, ep)
+ ep := defaultEvalParams(&txn)
+ pass, err := EvalSignature(0, ep)
if !pass {
b.Log(hex.EncodeToString(data[i].programs))
- b.Log(sb.String())
+ b.Log(ep.Trace.String())
}
if err != nil {
require.NoError(b, err)
diff --git a/data/transactions/logic/evalStateful_test.go b/data/transactions/logic/evalStateful_test.go
index ee940026b..91dff7192 100644
--- a/data/transactions/logic/evalStateful_test.go
+++ b/data/transactions/logic/evalStateful_test.go
@@ -22,11 +22,11 @@ import (
"strings"
"testing"
+ "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/algorand/go-algorand/data/basics"
"github.com/algorand/go-algorand/data/transactions"
- "github.com/algorand/go-algorand/data/transactions/logictest"
"github.com/algorand/go-algorand/protocol"
"github.com/algorand/go-algorand/test/partitiontest"
)
@@ -44,22 +44,21 @@ func makeApp(li uint64, lb uint64, gi uint64, gb uint64) basics.AppParams {
}
}
-func makeSampleEnv() (EvalParams, *logictest.Ledger) {
+func makeSampleEnv() (*EvalParams, *transactions.Transaction, *Ledger) {
return makeSampleEnvWithVersion(LogicVersion)
}
-func makeSampleEnvWithVersion(version uint64) (EvalParams, *logictest.Ledger) {
- txn := makeSampleTxn()
- ep := defaultEvalParamsWithVersion(nil, &txn, version)
- ep.TxnGroup = makeSampleTxnGroup(txn)
- ledger := logictest.MakeLedger(map[basics.Address]uint64{})
+func makeSampleEnvWithVersion(version uint64) (*EvalParams, *transactions.Transaction, *Ledger) {
+ ep := defaultEvalParamsWithVersion(nil, version)
+ ep.TxnGroup = transactions.WrapSignedTxnsWithAD(makeSampleTxnGroup(makeSampleTxn()))
+ ledger := MakeLedger(map[basics.Address]uint64{})
ep.Ledger = ledger
- return ep, ledger
+ return ep, &ep.TxnGroup[0].Txn, ledger
}
-func makeOldAndNewEnv(version uint64) (EvalParams, EvalParams, *logictest.Ledger) {
- new, sharedLedger := makeSampleEnv()
- old, _ := makeSampleEnvWithVersion(version)
+func makeOldAndNewEnv(version uint64) (*EvalParams, *EvalParams, *Ledger) {
+ new, _, sharedLedger := makeSampleEnv()
+ old, _, _ := makeSampleEnvWithVersion(version)
old.Ledger = sharedLedger
return old, new, sharedLedger
}
@@ -190,33 +189,19 @@ pop
bytec_0
log
`
- type desc struct {
- source string
- eval func([]byte, EvalParams) (bool, error)
- check func([]byte, EvalParams) error
- }
- tests := map[runMode]desc{
- runModeSignature: {
- source: opcodesRunModeAny + opcodesRunModeSignature,
- eval: func(program []byte, ep EvalParams) (bool, error) { return Eval(program, ep) },
- check: func(program []byte, ep EvalParams) error { return Check(program, ep) },
- },
- runModeApplication: {
- source: opcodesRunModeAny + opcodesRunModeApplication,
- eval: func(program []byte, ep EvalParams) (bool, error) { return EvalStateful(program, ep) },
- check: func(program []byte, ep EvalParams) error { return CheckStateful(program, ep) },
- },
+ tests := map[runMode]string{
+ runModeSignature: opcodesRunModeAny + opcodesRunModeSignature,
+ runModeApplication: opcodesRunModeAny + opcodesRunModeApplication,
}
- txn := makeSampleTxn()
- txgroup := makeSampleTxnGroup(txn)
- txn.Lsig.Args = [][]byte{
- txn.Txn.Sender[:],
- txn.Txn.Receiver[:],
- txn.Txn.CloseRemainderTo[:],
- txn.Txn.VotePK[:],
- txn.Txn.SelectionPK[:],
- txn.Txn.Note,
+ ep, tx, ledger := makeSampleEnv()
+ ep.TxnGroup[0].Lsig.Args = [][]byte{
+ tx.Sender[:],
+ tx.Receiver[:],
+ tx.CloseRemainderTo[:],
+ tx.VotePK[:],
+ tx.SelectionPK[:],
+ tx.Note,
}
params := basics.AssetParams{
Total: 1000,
@@ -225,57 +210,35 @@ log
UnitName: "ALGO",
AssetName: "",
URL: string(protocol.PaymentTx),
- Manager: txn.Txn.Sender,
- Reserve: txn.Txn.Receiver,
- Freeze: txn.Txn.Receiver,
- Clawback: txn.Txn.Receiver,
+ Manager: tx.Sender,
+ Reserve: tx.Receiver,
+ Freeze: tx.Receiver,
+ Clawback: tx.Receiver,
}
algoValue := basics.TealValue{Type: basics.TealUintType, Uint: 0x77}
- ledger := logictest.MakeLedger(
- map[basics.Address]uint64{
- txn.Txn.Sender: 1,
- },
- )
- ledger.NewApp(txn.Txn.Sender, 100, basics.AppParams{})
- ledger.NewLocal(txn.Txn.Sender, 100, "ALGO", algoValue)
- ledger.NewAsset(txn.Txn.Sender, 5, params)
+ ledger.NewAccount(tx.Sender, 1)
+ ledger.NewApp(tx.Sender, 100, basics.AppParams{})
+ ledger.NewLocals(tx.Sender, 100)
+ ledger.NewLocal(tx.Sender, 100, "ALGO", algoValue)
+ ledger.NewAsset(tx.Sender, 5, params)
for mode, test := range tests {
t.Run(fmt.Sprintf("opcodes_mode=%d", mode), func(t *testing.T) {
- ops := testProg(t, test.source, AssemblerMaxVersion)
- sb := strings.Builder{}
- ep := defaultEvalParams(&sb, &txn)
- ep.TxnGroup = txgroup
- ep.Ledger = ledger
- ep.Txn.Txn.ApplicationID = 100
- ep.Txn.Txn.ForeignAssets = []basics.AssetIndex{5} // needed since v4
-
- err := test.check(ops.Program, ep)
- require.NoError(t, err)
- _, err = test.eval(ops.Program, ep)
- if err != nil {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ ep.TxnGroup[0].Txn.ApplicationID = 100
+ ep.TxnGroup[0].Txn.ForeignAssets = []basics.AssetIndex{5} // needed since v4
+ if mode == runModeSignature {
+ testLogic(t, test, AssemblerMaxVersion, ep)
+ } else {
+ testApp(t, test, ep)
}
- require.NoError(t, err)
})
}
// check err opcode work in both modes
- for mode, test := range tests {
- t.Run(fmt.Sprintf("err_mode=%d", mode), func(t *testing.T) {
- source := "err"
- ops, err := AssembleStringWithVersion(source, AssemblerMaxVersion)
- require.NoError(t, err)
- ep := defaultEvalParams(nil, nil)
- err = test.check(ops.Program, ep)
- require.NoError(t, err)
- _, err = test.eval(ops.Program, ep)
- require.Error(t, err)
- require.NotContains(t, err.Error(), "not allowed in current mode")
- require.Contains(t, err.Error(), "err opcode")
- })
- }
+ source := "err"
+ testLogic(t, source, AssemblerMaxVersion, defaultEvalParams(nil), "encountered err")
+ testApp(t, source, defaultEvalParams(nil), "encountered err")
+ // require.NotContains(t, err.Error(), "not allowed in current mode")
// check that ed25519verify and arg are not allowed in stateful mode between v2-v4
disallowedV4 := []string{
@@ -288,12 +251,8 @@ log
}
for _, source := range disallowedV4 {
ops := testProg(t, source, 4)
- ep := defaultEvalParams(nil, nil)
- err := CheckStateful(ops.Program, ep)
- require.Error(t, err)
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "not allowed in current mode")
+ testAppBytes(t, ops.Program, defaultEvalParams(nil),
+ "not allowed in current mode", "not allowed in current mode")
}
// check that arg is not allowed in stateful mode beyond v5
@@ -306,12 +265,8 @@ log
}
for _, source := range disallowed {
ops := testProg(t, source, AssemblerMaxVersion)
- ep := defaultEvalParams(nil, nil)
- err := CheckStateful(ops.Program, ep)
- require.Error(t, err)
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "not allowed in current mode")
+ testAppBytes(t, ops.Program, defaultEvalParams(nil),
+ "not allowed in current mode", "not allowed in current mode")
}
// check stateful opcodes are not allowed in stateless mode
@@ -333,13 +288,8 @@ log
}
for _, source := range statefulOpcodeCalls {
- ops := testProg(t, source, AssemblerMaxVersion)
- ep := defaultEvalParams(nil, nil)
- err := Check(ops.Program, ep)
- require.Error(t, err)
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "not allowed in current mode")
+ testLogic(t, source, AssemblerMaxVersion, defaultEvalParams(nil),
+ "not allowed in current mode", "not allowed in current mode")
}
require.Equal(t, runMode(1), runModeSignature)
@@ -353,9 +303,9 @@ func TestBalance(t *testing.T) {
t.Parallel()
- ep, ledger := makeSampleEnv()
+ ep, tx, ledger := makeSampleEnv()
text := "int 2; balance; int 177; =="
- ledger.NewAccount(ep.Txn.Txn.Receiver, 177)
+ ledger.NewAccount(tx.Receiver, 177)
testApp(t, text, ep, "invalid Account reference")
text = `int 1; balance; int 177; ==`
@@ -363,7 +313,7 @@ func TestBalance(t *testing.T) {
text = `txn Accounts 1; balance; int 177; ==;`
// won't assemble in old version teal
- testProg(t, text, directRefEnabledVersion-1, expect{2, "balance arg 0 wanted type uint64..."})
+ testProg(t, text, directRefEnabledVersion-1, Expect{2, "balance arg 0 wanted type uint64..."})
// but legal after that
testApp(t, text, ep)
@@ -373,48 +323,112 @@ func TestBalance(t *testing.T) {
ledger.NewAccount(addr, 13)
testApp(t, text, ep, "assert failed")
- ledger.NewAccount(ep.Txn.Txn.Sender, 13)
+ ledger.NewAccount(tx.Sender, 13)
testApp(t, text, ep)
}
-func testApp(t *testing.T, program string, ep EvalParams, problems ...string) transactions.EvalDelta {
+func testApps(t *testing.T, programs []string, txgroup []transactions.SignedTxn, version uint64, ledger LedgerForLogic,
+ expected ...Expect) {
+ t.Helper()
+ codes := make([][]byte, len(programs))
+ for i, program := range programs {
+ if program != "" {
+ codes[i] = testProg(t, program, version).Program
+ }
+ }
+ ep := NewEvalParams(transactions.WrapSignedTxnsWithAD(txgroup), makeTestProtoV(version), &transactions.SpecialAddresses{})
+ ep.Ledger = ledger
+ testAppsBytes(t, codes, ep, expected...)
+}
+
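+// testAppsBytes runs each program against the transaction at the same group
+// index. An Expect{i, s} (only the first is consulted) marks the program at
+// index i as expected to fail with a message containing s; the programs before
+// it must pass, and evaluation stops at that failure.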
+func testAppsBytes(t *testing.T, programs [][]byte, ep *EvalParams, expected ...Expect) {
+ t.Helper()
+ require.Equal(t, len(programs), len(ep.TxnGroup))
+ for i := range ep.TxnGroup {
+ if programs[i] != nil {
+ if len(expected) > 0 && expected[0].l == i {
+ testAppFull(t, programs[i], i, basics.AppIndex(888), ep, expected[0].s)
+ break // Stop after first failure
+ } else {
+ testAppFull(t, programs[i], i, basics.AppIndex(888), ep)
+ }
+ }
+ }
+}
+
+func testApp(t *testing.T, program string, ep *EvalParams, problems ...string) transactions.EvalDelta {
t.Helper()
ops := testProg(t, program, ep.Proto.LogicSigVersion)
- err := CheckStateful(ops.Program, ep)
- require.NoError(t, err)
+ return testAppBytes(t, ops.Program, ep, problems...)
+}
- // we only use this to test stateful apps. While, I suppose
- // it's *legal* to have an app with no stateful ops, this
- // convenience routine can assume it, and check it.
- pass, err := Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "not allowed in current mode")
- require.False(t, pass)
+func testAppBytes(t *testing.T, program []byte, ep *EvalParams, problems ...string) transactions.EvalDelta {
+ t.Helper()
+ ep.reset()
+ aid := ep.TxnGroup[0].Txn.ApplicationID
+ if aid == basics.AppIndex(0) {
+ aid = basics.AppIndex(888)
+ }
+ return testAppFull(t, program, 0, aid, ep, problems...)
+}
+
+// testAppFull gives a lot of control to caller - in particular, notice that
+// ep.reset() is in testAppBytes, not here. This means that ADs in the ep are
+// not cleared, so repeated use of a single ep is probably not a good idea
+// unless you are *intending* to see how ep is modified as you go.
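+//
+// The trailing problems follow the same convention as testApp/testAppBytes.
+// Illustrative (hypothetical) calls:
+//   testAppFull(t, prog, 0, 888, ep)                         // Check and Eval must pass
+//   testAppFull(t, prog, 0, 888, ep, "evalErr")              // Eval must fail with "evalErr"
+//   testAppFull(t, prog, 0, 888, ep, "checkErr", "evalErr")  // Check and Eval must both fail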
+func testAppFull(t *testing.T, program []byte, gi int, aid basics.AppIndex, ep *EvalParams, problems ...string) transactions.EvalDelta {
+ t.Helper()
+
+ var checkProblem string
+ var evalProblem string
+ switch len(problems) {
+ case 2:
+ checkProblem = problems[0]
+ evalProblem = problems[1]
+ case 1:
+ evalProblem = problems[0]
+ case 0:
+ default:
+ require.Fail(t, "Misused testApp: %d problems", len(problems))
+ }
sb := &strings.Builder{}
ep.Trace = sb
- pass, err = EvalStateful(ops.Program, ep)
- if len(problems) == 0 {
+
+ err := CheckContract(program, ep)
+ if checkProblem == "" {
require.NoError(t, err, sb.String())
- require.True(t, pass, sb.String())
- delta, err := ep.Ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
- return delta
+ } else {
+ require.Error(t, err, "Check\n%s\nExpected: %v", sb, checkProblem)
+ require.Contains(t, err.Error(), checkProblem, sb.String())
}
- require.Error(t, err, sb.String())
- for _, problem := range problems {
- require.Contains(t, err.Error(), problem)
+ // We continue on to check Eval() of things that failed Check() because it's
+ // a nice confirmation that Check() is usually stricter than Eval(). This
+ // may mean that the problems argument is often duplicated, but this seems
+ // the best way to be concise about all sorts of tests.
+
+ if ep.Ledger == nil {
+ ep.Ledger = MakeLedger(nil)
}
- if ep.Ledger != nil {
- delta, err := ep.Ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
- require.Empty(t, delta.GlobalDelta)
- require.Empty(t, delta.LocalDeltas)
- require.Empty(t, delta.Logs)
+
+ pass, err := EvalApp(program, gi, aid, ep)
+ delta := ep.TxnGroup[gi].EvalDelta
+ if evalProblem == "" {
+ require.NoError(t, err, "Eval%s\nExpected: PASS", sb)
+ require.True(t, pass, "Eval%s\nExpected: PASS", sb)
return delta
}
- return transactions.EvalDelta{}
+
+ // There is an evalProblem to check. REJECT is special and only means that
+ // the app didn't accept. Maybe it's an error, maybe it's just !pass.
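+// (For example, a problems value of "REJECT" is satisfied by a program that
+// merely returns 0, with no error at all.)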
+ if evalProblem == "REJECT" {
+ require.True(t, err != nil || !pass, "Eval%s\nExpected: REJECT", sb)
+ } else {
+ require.Error(t, err, "Eval\n%s\nExpected: %v", sb, evalProblem)
+ require.Contains(t, err.Error(), evalProblem)
+ }
+ return delta
}
func TestMinBalance(t *testing.T) {
@@ -422,35 +436,36 @@ func TestMinBalance(t *testing.T) {
t.Parallel()
- ep, ledger := makeSampleEnv()
+ ep, tx, ledger := makeSampleEnv()
- ledger.NewAccount(ep.Txn.Txn.Sender, 234)
- ledger.NewAccount(ep.Txn.Txn.Receiver, 123)
+ ledger.NewAccount(tx.Sender, 234)
+ ledger.NewAccount(tx.Receiver, 123)
testApp(t, "int 0; min_balance; int 1001; ==", ep)
// Sender makes an asset, min balance goes up
- ledger.NewAsset(ep.Txn.Txn.Sender, 7, basics.AssetParams{Total: 1000})
+ ledger.NewAsset(tx.Sender, 7, basics.AssetParams{Total: 1000})
testApp(t, "int 0; min_balance; int 2002; ==", ep)
schemas := makeApp(1, 2, 3, 4)
- ledger.NewApp(ep.Txn.Txn.Sender, 77, schemas)
+ ledger.NewApp(tx.Sender, 77, schemas)
+ ledger.NewLocals(tx.Sender, 77)
// create + optin + 10 schema base + 4 ints + 6 bytes (local
- // and global count b/c NewApp opts the creator in)
+ // and global count b/c NewLocals opts the creator in)
minb := 2*1002 + 10*1003 + 4*1004 + 6*1005
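+ // For reference, that is 2004 + 10030 + 4016 + 6030 = 22080, so the check
+ // below expects a min balance of 2002 + 22080 = 24082.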
testApp(t, fmt.Sprintf("int 0; min_balance; int %d; ==", 2002+minb), ep)
// request extra program pages, min balance increase
withepp := makeApp(1, 2, 3, 4)
withepp.ExtraProgramPages = 2
- ledger.NewApp(ep.Txn.Txn.Sender, 77, withepp)
+ ledger.NewApp(tx.Sender, 77, withepp)
minb += 2 * 1002
testApp(t, fmt.Sprintf("int 0; min_balance; int %d; ==", 2002+minb), ep)
testApp(t, "int 1; min_balance; int 1001; ==", ep) // 1 == Accounts[0]
testProg(t, "txn Accounts 1; min_balance; int 1001; ==", directRefEnabledVersion-1,
- expect{2, "min_balance arg 0 wanted type uint64..."})
+ Expect{2, "min_balance arg 0 wanted type uint64..."})
testProg(t, "txn Accounts 1; min_balance; int 1001; ==", directRefEnabledVersion)
testApp(t, "txn Accounts 1; min_balance; int 1001; ==", ep) // 1 == Accounts[0]
// Receiver opts in
- ledger.NewHolding(ep.Txn.Txn.Receiver, 7, 1, true)
+ ledger.NewHolding(tx.Receiver, 7, 1, true)
testApp(t, "int 1; min_balance; int 2002; ==", ep) // 1 == Accounts[0]
testApp(t, "int 2; min_balance; int 1001; ==", ep, "invalid Account reference 2")
@@ -464,15 +479,12 @@ func TestAppCheckOptedIn(t *testing.T) {
txn := makeSampleTxn()
txgroup := makeSampleTxnGroup(txn)
- now := defaultEvalParams(nil, nil)
- now.Txn = &txn
- now.TxnGroup = txgroup
- pre := defaultEvalParamsWithVersion(nil, nil, directRefEnabledVersion-1)
- pre.Txn = &txn
- pre.TxnGroup = txgroup
- testApp(t, "int 2; int 100; app_opted_in; int 1; ==", now, "ledger not available")
-
- ledger := logictest.MakeLedger(
+ now := defaultEvalParams(&txn)
+ now.TxnGroup = transactions.WrapSignedTxnsWithAD(txgroup)
+ pre := defaultEvalParamsWithVersion(&txn, directRefEnabledVersion-1)
+ pre.TxnGroup = transactions.WrapSignedTxnsWithAD(txgroup)
+
+ ledger := MakeLedger(
map[basics.Address]uint64{
txn.Txn.Receiver: 1,
txn.Txn.Sender: 1,
@@ -491,16 +503,16 @@ func TestAppCheckOptedIn(t *testing.T) {
testApp(t, "int 0; int 100; app_opted_in; int 0; ==", now)
// Receiver opted in
- ledger.NewApp(txn.Txn.Receiver, 100, basics.AppParams{})
+ ledger.NewLocals(txn.Txn.Receiver, 100)
testApp(t, "int 1; int 100; app_opted_in; int 1; ==", now)
testApp(t, "int 1; int 2; app_opted_in; int 1; ==", now)
testApp(t, "int 1; int 2; app_opted_in; int 0; ==", pre) // in pre, int 2 is an actual app id
testApp(t, "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui01\"; int 2; app_opted_in; int 1; ==", now)
testProg(t, "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui01\"; int 2; app_opted_in; int 1; ==", directRefEnabledVersion-1,
- expect{3, "app_opted_in arg 0 wanted type uint64..."})
+ Expect{3, "app_opted_in arg 0 wanted type uint64..."})
// Sender opted in
- ledger.NewApp(txn.Txn.Sender, 100, basics.AppParams{})
+ ledger.NewLocals(txn.Txn.Sender, 100)
testApp(t, "int 0; int 100; app_opted_in; int 1; ==", now)
}
@@ -524,7 +536,7 @@ int 1
==`
pre, now, ledger := makeOldAndNewEnv(directRefEnabledVersion - 1)
- ledger.NewAccount(now.Txn.Txn.Receiver, 1)
+ ledger.NewAccount(now.TxnGroup[0].Txn.Receiver, 1)
testApp(t, text, now, "invalid Account reference")
text = `int 1 // account idx
@@ -543,11 +555,12 @@ int 1`
testApp(t, text, now, "no app for account")
// Make a different app (not 100)
- ledger.NewApp(now.Txn.Txn.Receiver, 9999, basics.AppParams{})
+ ledger.NewApp(now.TxnGroup[0].Txn.Receiver, 9999, basics.AppParams{})
testApp(t, text, now, "no app for account")
// create the app and check the value from ApplicationArgs[0] (protocol.PaymentTx) does not exist
- ledger.NewApp(now.Txn.Txn.Receiver, 100, basics.AppParams{})
+ ledger.NewApp(now.TxnGroup[0].Txn.Receiver, 100, basics.AppParams{})
+ ledger.NewLocals(now.TxnGroup[0].Txn.Receiver, 100)
testApp(t, text, now)
text = `int 1 // account idx
@@ -559,17 +572,17 @@ err
exist:
byte 0x414c474f
==`
- ledger.NewLocal(now.Txn.Txn.Receiver, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
+ ledger.NewLocal(now.TxnGroup[0].Txn.Receiver, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
testApp(t, text, now)
testApp(t, strings.Replace(text, "int 1 // account idx", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui01\"", -1), now)
testProg(t, strings.Replace(text, "int 1 // account idx", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui01\"", -1), directRefEnabledVersion-1,
- expect{4, "app_local_get_ex arg 0 wanted type uint64..."})
+ Expect{4, "app_local_get_ex arg 0 wanted type uint64..."})
testApp(t, strings.Replace(text, "int 100 // app id", "int 2", -1), now)
// Next we're testing if the use of the current app's id works
// as a direct reference. The error is because the sender
// account is not opted into 123.
- ledger.NewApp(now.Txn.Txn.RekeyTo, 123, basics.AppParams{})
+ now.TxnGroup[0].Txn.ApplicationID = 123
testApp(t, strings.Replace(text, "int 100 // app id", "int 123", -1), now, "no app for account")
testApp(t, strings.Replace(text, "int 100 // app id", "int 2", -1), pre, "no app for account")
testApp(t, strings.Replace(text, "int 100 // app id", "int 9", -1), now, "invalid App reference 9")
@@ -577,7 +590,8 @@ byte 0x414c474f
"no such address")
// check special case account idx == 0 => sender
- ledger.NewApp(now.Txn.Txn.Sender, 100, basics.AppParams{})
+ ledger.NewApp(now.TxnGroup[0].Txn.Sender, 100, basics.AppParams{})
+ ledger.NewLocals(now.TxnGroup[0].Txn.Sender, 100)
text = `int 0 // account idx
int 100 // app id
txn ApplicationArgs 0
@@ -588,15 +602,15 @@ exist:
byte 0x414c474f
==`
- ledger.NewLocal(now.Txn.Txn.Sender, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
+ ledger.NewLocal(now.TxnGroup[0].Txn.Sender, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
testApp(t, text, now)
testApp(t, strings.Replace(text, "int 0 // account idx", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00\"", -1), now)
testApp(t, strings.Replace(text, "int 0 // account idx", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui02\"", -1), now,
"invalid Account reference")
// check reading state of other app
- ledger.NewApp(now.Txn.Txn.Sender, 56, basics.AppParams{})
- ledger.NewApp(now.Txn.Txn.Sender, 100, basics.AppParams{})
+ ledger.NewApp(now.TxnGroup[0].Txn.Sender, 56, basics.AppParams{})
+ ledger.NewApp(now.TxnGroup[0].Txn.Sender, 100, basics.AppParams{})
text = `int 0 // account idx
int 56 // app id
txn ApplicationArgs 0
@@ -607,7 +621,8 @@ exist:
byte 0x414c474f
==`
- ledger.NewLocal(now.Txn.Txn.Sender, 56, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
+ ledger.NewLocals(now.TxnGroup[0].Txn.Sender, 56)
+ ledger.NewLocal(now.TxnGroup[0].Txn.Sender, 56, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
testApp(t, text, now)
// check app_local_get
@@ -617,11 +632,12 @@ app_local_get
byte 0x414c474f
==`
- ledger.NewLocal(now.Txn.Txn.Sender, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
+ ledger.NewLocal(now.TxnGroup[0].Txn.Sender, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
+ now.TxnGroup[0].Txn.ApplicationID = 100
testApp(t, text, now)
testApp(t, strings.Replace(text, "int 0 // account idx", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00\"", -1), now)
testProg(t, strings.Replace(text, "int 0 // account idx", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00\"", -1), directRefEnabledVersion-1,
- expect{3, "app_local_get arg 0 wanted type uint64..."})
+ Expect{3, "app_local_get arg 0 wanted type uint64..."})
testApp(t, strings.Replace(text, "int 0 // account idx", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui01\"", -1), now)
testApp(t, strings.Replace(text, "int 0 // account idx", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui02\"", -1), now,
"invalid Account reference")
@@ -633,7 +649,7 @@ app_local_get
int 0
==`
- ledger.NewLocal(now.Txn.Txn.Sender, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
+ ledger.NewLocal(now.TxnGroup[0].Txn.Sender, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
testApp(t, text, now)
}
@@ -666,14 +682,14 @@ byte 0x414c474f
&&
`
pre, now, ledger := makeOldAndNewEnv(directRefEnabledVersion - 1)
- ledger.NewAccount(now.Txn.Txn.Sender, 1)
+ ledger.NewAccount(now.TxnGroup[0].Txn.Sender, 1)
- now.Txn.Txn.ApplicationID = 100
- now.Txn.Txn.ForeignApps = []basics.AppIndex{now.Txn.Txn.ApplicationID}
+ now.TxnGroup[0].Txn.ApplicationID = 100
+ now.TxnGroup[0].Txn.ForeignApps = []basics.AppIndex{now.TxnGroup[0].Txn.ApplicationID}
testApp(t, text, now, "no such app")
// create the app and check the value from ApplicationArgs[0] (protocol.PaymentTx) does not exist
- ledger.NewApp(now.Txn.Txn.Sender, 100, basics.AppParams{})
+ ledger.NewApp(now.TxnGroup[0].Txn.Sender, 100, basics.AppParams{})
testApp(t, text, now, "err opcode")
@@ -692,7 +708,8 @@ byte 0x414c474f
// check app_global_get default value
text = "byte 0x414c474f55; app_global_get; int 0; =="
- ledger.NewLocal(now.Txn.Txn.Sender, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
+ ledger.NewLocals(now.TxnGroup[0].Txn.Sender, 100)
+ ledger.NewLocal(now.TxnGroup[0].Txn.Sender, 100, string(protocol.PaymentTx), basics.TealValue{Type: basics.TealBytesType, Bytes: "ALGO"})
testApp(t, text, now)
text = `
@@ -711,15 +728,17 @@ int 4141
// check that even during application creation (Txn.ApplicationID == 0)
// we will use the kvCow if the exact application ID (100) is
// specified in the transaction
- now.Txn.Txn.ApplicationID = 0
- now.Txn.Txn.ForeignApps = []basics.AppIndex{100}
- testApp(t, text, now)
+ now.TxnGroup[0].Txn.ApplicationID = 0
+ now.TxnGroup[0].Txn.ForeignApps = []basics.AppIndex{100}
+
+ testAppFull(t, testProg(t, text, LogicVersion).Program, 0, 100, now)
// Direct reference to the current app also works
- ledger.NewApp(now.Txn.Txn.Receiver, 100, basics.AppParams{})
- now.Txn.Txn.ForeignApps = []basics.AppIndex{}
- testApp(t, strings.Replace(text, "int 1 // ForeignApps index", "int 100", -1), now)
- testApp(t, strings.Replace(text, "int 1 // ForeignApps index", "global CurrentApplicationID", -1), now)
+ now.TxnGroup[0].Txn.ForeignApps = []basics.AppIndex{}
+ testAppFull(t, testProg(t, strings.Replace(text, "int 1 // ForeignApps index", "int 100", -1), LogicVersion).Program,
+ 0, 100, now)
+ testAppFull(t, testProg(t, strings.Replace(text, "int 1 // ForeignApps index", "global CurrentApplicationID", -1), LogicVersion).Program,
+ 0, 100, now)
}
const assetsTestTemplate = `int 0//account
@@ -848,23 +867,23 @@ func TestAssets(t *testing.T) {
func testAssetsByVersion(t *testing.T, assetsTestProgram string, version uint64) {
for _, field := range AssetHoldingFieldNames {
- fs := assetHoldingFieldSpecByName[field]
+ fs := AssetHoldingFieldSpecByName[field]
if fs.version <= version && !strings.Contains(assetsTestProgram, field) {
t.Errorf("TestAssets missing field %v", field)
}
}
for _, field := range AssetParamsFieldNames {
- fs := assetParamsFieldSpecByName[field]
+ fs := AssetParamsFieldSpecByName[field]
if fs.version <= version && !strings.Contains(assetsTestProgram, field) {
t.Errorf("TestAssets missing field %v", field)
}
}
txn := makeSampleTxn()
- pre := defaultEvalParamsWithVersion(nil, &txn, directRefEnabledVersion-1)
+ pre := defaultEvalParamsWithVersion(&txn, directRefEnabledVersion-1)
require.GreaterOrEqual(t, version, uint64(directRefEnabledVersion))
- now := defaultEvalParamsWithVersion(nil, &txn, version)
- ledger := logictest.MakeLedger(
+ now := defaultEvalParamsWithVersion(&txn, version)
+ ledger := MakeLedger(
map[basics.Address]uint64{
txn.Txn.Sender: 1,
},
@@ -881,7 +900,7 @@ func testAssetsByVersion(t *testing.T, assetsTestProgram string, version uint64)
// it wasn't legal to use a direct ref for account
testProg(t, `byte "aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00"; int 54; asset_holding_get AssetBalance`,
- directRefEnabledVersion-1, expect{3, "asset_holding_get AssetBalance arg 0 wanted type uint64..."})
+ directRefEnabledVersion-1, Expect{3, "asset_holding_get AssetBalance arg 0 wanted type uint64..."})
// but it is now (empty asset yields 0,0 on stack)
testApp(t, `byte "aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00"; int 55; asset_holding_get AssetBalance; ==`, now)
// This is receiver, who is in Assets array
@@ -924,7 +943,7 @@ func testAssetsByVersion(t *testing.T, assetsTestProgram string, version uint64)
testApp(t, strings.Replace(assetsTestProgram, "int 55", "int 0", -1), now)
// but old code cannot
- testProg(t, strings.Replace(assetsTestProgram, "int 0//account", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00\"", -1), directRefEnabledVersion-1, expect{3, "asset_holding_get AssetBalance arg 0 wanted type uint64..."})
+ testProg(t, strings.Replace(assetsTestProgram, "int 0//account", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00\"", -1), directRefEnabledVersion-1, Expect{3, "asset_holding_get AssetBalance arg 0 wanted type uint64..."})
if version < 5 {
// Can't run these with AppCreator anyway
@@ -954,7 +973,7 @@ intc_2 // 1
ops := testProg(t, source, version)
require.Equal(t, OpsByName[now.Proto.LogicSigVersion]["asset_holding_get"].Opcode, ops.Program[8])
ops.Program[9] = 0x02
- _, err := EvalStateful(ops.Program, now)
+ _, err := EvalApp(ops.Program, 0, 0, now)
require.Error(t, err)
require.Contains(t, err.Error(), "invalid asset_holding_get field 2")
@@ -979,7 +998,7 @@ intc_1
ops = testProg(t, source, version)
require.Equal(t, OpsByName[now.Proto.LogicSigVersion]["asset_params_get"].Opcode, ops.Program[6])
ops.Program[7] = 0x20
- _, err = EvalStateful(ops.Program, now)
+ _, err = EvalApp(ops.Program, 0, 0, now)
require.Error(t, err)
require.Contains(t, err.Error(), "invalid asset_params_get field 32")
@@ -1041,9 +1060,9 @@ intc_1
func TestAppParams(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- ep, ledger := makeSampleEnv()
- ledger.NewAccount(ep.Txn.Txn.Sender, 1)
- ledger.NewApp(ep.Txn.Txn.Sender, 100, basics.AppParams{})
+ ep, tx, ledger := makeSampleEnv()
+ ledger.NewAccount(tx.Sender, 1)
+ ledger.NewApp(tx.Sender, 100, basics.AppParams{})
/* app id is in ForeignApps, but does not exist */
source := "int 56; app_params_get AppExtraProgramPages; int 0; ==; assert; int 0; =="
@@ -1053,6 +1072,29 @@ func TestAppParams(t *testing.T) {
testApp(t, source, ep)
}
+func TestAcctParams(t *testing.T) {
+ partitiontest.PartitionTest(t)
+ t.Parallel()
+ ep, tx, ledger := makeSampleEnv()
+
+ source := "int 0; acct_params_get AcctBalance; !; assert; int 0; =="
+ testApp(t, source, ep)
+
+ source = "int 0; acct_params_get AcctMinBalance; !; assert; int 1001; =="
+ testApp(t, source, ep)
+
+ ledger.NewAccount(tx.Sender, 42)
+
+ source = "int 0; acct_params_get AcctBalance; assert; int 42; =="
+ testApp(t, source, ep)
+
+ source = "int 0; acct_params_get AcctMinBalance; assert; int 1001; =="
+ testApp(t, source, ep)
+
+ source = "int 0; acct_params_get AcctAuthAddr; assert; global ZeroAddress; =="
+ testApp(t, source, ep)
+}
+
func TestAppLocalReadWriteDeleteErrors(t *testing.T) {
partitiontest.PartitionTest(t)
@@ -1116,16 +1158,12 @@ intc_1
ops := testProg(t, source, AssemblerMaxVersion)
txn := makeSampleTxn()
- ep := defaultEvalParams(nil, nil)
- ep.Txn = &txn
- ep.Txn.Txn.ApplicationID = 100
- err := CheckStateful(ops.Program, ep)
+ txn.Txn.ApplicationID = 100
+ ep := defaultEvalParams(&txn)
+ err := CheckContract(ops.Program, ep)
require.NoError(t, err)
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "ledger not available")
- ledger := logictest.MakeLedger(
+ ledger := MakeLedger(
map[basics.Address]uint64{
txn.Txn.Sender: 1,
},
@@ -1135,19 +1173,20 @@ intc_1
saved := ops.Program[firstCmdOffset]
require.Equal(t, OpsByName[0]["intc_0"].Opcode, saved)
ops.Program[firstCmdOffset] = OpsByName[0]["intc_1"].Opcode
- _, err = EvalStateful(ops.Program, ep)
+ _, err = EvalApp(ops.Program, 0, 100, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "invalid Account reference 100")
ops.Program[firstCmdOffset] = saved
- _, err = EvalStateful(ops.Program, ep)
+ _, err = EvalApp(ops.Program, 0, 100, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "no app for account")
ledger.NewApp(txn.Txn.Sender, 100, basics.AppParams{})
+ ledger.NewLocals(txn.Txn.Sender, 100)
if name == "read" {
- _, err = EvalStateful(ops.Program, ep)
+ _, err = EvalApp(ops.Program, 0, 100, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "err opcode") // no such key
}
@@ -1156,11 +1195,10 @@ intc_1
ledger.NewLocal(txn.Txn.Sender, 100, "ALGOA", basics.TealValue{Type: basics.TealUintType, Uint: 1})
ledger.Reset()
- pass, err := EvalStateful(ops.Program, ep)
+ pass, err := EvalApp(ops.Program, 0, 100, ep)
require.NoError(t, err)
require.True(t, pass)
- delta, err := ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta := ep.TxnGroup[0].EvalDelta
require.Empty(t, delta.GlobalDelta)
expLocal := 1
if name == "read" {
@@ -1176,17 +1214,17 @@ func TestAppLocalStateReadWrite(t *testing.T) {
t.Parallel()
- ep := defaultEvalParams(nil, nil)
txn := makeSampleTxn()
txn.Txn.ApplicationID = 100
- ep.Txn = &txn
- ledger := logictest.MakeLedger(
+ ep := defaultEvalParams(&txn)
+ ledger := MakeLedger(
map[basics.Address]uint64{
txn.Txn.Sender: 1,
},
)
ep.Ledger = ledger
ledger.NewApp(txn.Txn.Sender, 100, basics.AppParams{})
+ ledger.NewLocals(txn.Txn.Sender, 100)
// write int and bytes values
source := `int 0 // account
@@ -1217,15 +1255,7 @@ int 0x77
==
&&
`
- ops, err := AssembleStringWithVersion(source, AssemblerMaxVersion)
- require.NoError(t, err)
- err = CheckStateful(ops.Program, ep)
- require.NoError(t, err)
- pass, err := EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err := ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta := testApp(t, source, ep)
require.Empty(t, 0, delta.GlobalDelta)
require.Len(t, delta.LocalDeltas, 1)
@@ -1260,14 +1290,7 @@ int 0x77
algoValue := basics.TealValue{Type: basics.TealUintType, Uint: 0x77}
ledger.NewLocal(txn.Txn.Sender, 100, "ALGO", algoValue)
- ops = testProg(t, source, AssemblerMaxVersion)
- err = CheckStateful(ops.Program, ep)
- require.NoError(t, err)
- pass, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Empty(t, delta.GlobalDelta)
require.Empty(t, delta.LocalDeltas)
@@ -1296,12 +1319,7 @@ exist2:
ledger.NewLocal(txn.Txn.Sender, 100, "ALGO", algoValue)
ledger.NoLocal(txn.Txn.Sender, 100, "ALGOA")
- ops = testProg(t, source, AssemblerMaxVersion)
- pass, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Empty(t, delta.GlobalDelta)
require.Empty(t, delta.LocalDeltas)
@@ -1316,12 +1334,7 @@ int 1
ledger.NewLocal(txn.Txn.Sender, 100, "ALGO", algoValue)
ledger.NoLocal(txn.Txn.Sender, 100, "ALGOA")
- ops = testProg(t, source, AssemblerMaxVersion)
- pass, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Empty(t, delta.GlobalDelta)
require.Len(t, delta.LocalDeltas, 1)
require.Len(t, delta.LocalDeltas[0], 1)
@@ -1348,12 +1361,7 @@ int 0x78
ledger.NewLocal(txn.Txn.Sender, 100, "ALGO", algoValue)
ledger.NoLocal(txn.Txn.Sender, 100, "ALGOA")
- ops = testProg(t, source, AssemblerMaxVersion)
- pass, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Empty(t, delta.GlobalDelta)
require.Len(t, delta.LocalDeltas, 1)
require.Len(t, delta.LocalDeltas[0], 1)
@@ -1378,12 +1386,7 @@ app_local_put
ledger.NewLocal(txn.Txn.Sender, 100, "ALGO", algoValue)
ledger.NoLocal(txn.Txn.Sender, 100, "ALGOA")
- ops = testProg(t, source, AssemblerMaxVersion)
- pass, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Empty(t, delta.GlobalDelta)
require.Len(t, delta.LocalDeltas, 1)
require.Len(t, delta.LocalDeltas[0], 1)
@@ -1414,17 +1417,10 @@ int 1
ledger.NewLocal(txn.Txn.Sender, 100, "ALGO", algoValue)
ledger.NoLocal(txn.Txn.Sender, 100, "ALGOA")
- ledger.NewAccount(ep.Txn.Txn.Receiver, 500)
+ ledger.NewAccount(txn.Txn.Receiver, 500)
ledger.NewLocals(txn.Txn.Receiver, 100)
- ops = testProg(t, source, AssemblerMaxVersion)
- err = CheckStateful(ops.Program, ep)
- require.NoError(t, err)
- pass, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Empty(t, delta.GlobalDelta)
require.Len(t, delta.LocalDeltas, 2)
require.Len(t, delta.LocalDeltas[0], 2)
@@ -1482,42 +1478,21 @@ int 1
ops, err := AssembleStringWithVersion(source, AssemblerMaxVersion)
require.NoError(t, err)
- txn := makeSampleTxn()
- ep := defaultEvalParams(nil, nil)
- ep.Txn = &txn
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "ledger not available")
-
- ledger := logictest.MakeLedger(
- map[basics.Address]uint64{
- txn.Txn.Sender: 1,
- },
- )
- ep.Ledger = ledger
-
- txn.Txn.ApplicationID = 100
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "no such app")
+ ep, txn, ledger := makeSampleEnv()
+ txn.ApplicationID = basics.AppIndex(100)
+ testAppBytes(t, ops.Program, ep, "no such app")
- ledger.NewApp(txn.Txn.Sender, 100, makeApp(0, 0, 1, 0))
+ ledger.NewApp(txn.Sender, 100, makeApp(0, 0, 1, 0))
// a special test for read
if name == "read" {
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "err opcode") // no such key
+ testAppBytes(t, ops.Program, ep, "err opcode") // no such key
}
ledger.NewGlobal(100, "ALGO", basics.TealValue{Type: basics.TealUintType, Uint: 0x77})
ledger.Reset()
- pass, err := EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err := ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta := testAppBytes(t, ops.Program, ep)
require.Empty(t, delta.LocalDeltas)
})
}
@@ -1587,12 +1562,11 @@ int 0x77
==
&&
`
- ep := defaultEvalParams(nil, nil)
txn := makeSampleTxn()
txn.Txn.ApplicationID = 100
txn.Txn.ForeignApps = []basics.AppIndex{txn.Txn.ApplicationID}
- ep.Txn = &txn
- ledger := logictest.MakeLedger(
+ ep := defaultEvalParams(&txn)
+ ledger := MakeLedger(
map[basics.Address]uint64{
txn.Txn.Sender: 1,
},
@@ -1600,15 +1574,7 @@ int 0x77
ep.Ledger = ledger
ledger.NewApp(txn.Txn.Sender, 100, basics.AppParams{})
- ops, err := AssembleStringWithVersion(source, AssemblerMaxVersion)
- require.NoError(t, err)
- err = CheckStateful(ops.Program, ep)
- require.NoError(t, err)
- pass, err := EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err := ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta := testApp(t, source, ep)
require.Len(t, delta.GlobalDelta, 2)
require.Empty(t, delta.LocalDeltas)
@@ -1637,13 +1603,7 @@ int 0x77
algoValue := basics.TealValue{Type: basics.TealUintType, Uint: 0x77}
ledger.NewGlobal(100, "ALGO", algoValue)
- ops = testProg(t, source, AssemblerMaxVersion)
- pass, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
-
+ delta = testApp(t, source, ep)
require.Empty(t, delta.GlobalDelta)
require.Empty(t, delta.LocalDeltas)
@@ -1667,12 +1627,7 @@ int 0x77
ledger.NoGlobal(100, "ALGOA")
ledger.NewGlobal(100, "ALGO", algoValue)
- ops = testProg(t, source, AssemblerMaxVersion)
- pass, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Empty(t, delta.GlobalDelta)
require.Empty(t, delta.LocalDeltas)
@@ -1712,20 +1667,7 @@ byte 0x414c474f
ledger.NoGlobal(100, "ALGOA")
ledger.NewGlobal(100, "ALGO", algoValue)
- ops = testProg(t, source, AssemblerMaxVersion)
- sb := strings.Builder{}
- ep.Trace = &sb
- err = CheckStateful(ops.Program, ep)
- require.NoError(t, err)
- pass, err = EvalStateful(ops.Program, ep)
- if !pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.NoError(t, err)
- require.True(t, pass)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Len(t, delta.GlobalDelta, 2)
require.Empty(t, delta.LocalDeltas)
@@ -1759,12 +1701,11 @@ ok2:
byte "myval"
==
`
- ep := defaultEvalParams(nil, nil)
txn := makeSampleTxn()
txn.Txn.ApplicationID = 100
txn.Txn.ForeignApps = []basics.AppIndex{txn.Txn.ApplicationID, 101}
- ep.Txn = &txn
- ledger := logictest.MakeLedger(
+ ep := defaultEvalParams(&txn)
+ ledger := MakeLedger(
map[basics.Address]uint64{
txn.Txn.Sender: 1,
},
@@ -1806,11 +1747,10 @@ app_global_get
int 7
==
`
- ep := defaultEvalParams(nil, nil)
txn := makeSampleTxn()
txn.Txn.ApplicationID = 100
- ep.Txn = &txn
- ledger := logictest.MakeLedger(
+ ep := defaultEvalParams(&txn)
+ ledger := MakeLedger(
map[basics.Address]uint64{
txn.Txn.Sender: 1,
},
@@ -1853,11 +1793,10 @@ err
ok:
int 1
`
- ep := defaultEvalParams(nil, nil)
txn := makeSampleTxn()
txn.Txn.ApplicationID = 100
- ep.Txn = &txn
- ledger := logictest.MakeLedger(
+ ep := defaultEvalParams(&txn)
+ ledger := MakeLedger(
map[basics.Address]uint64{
txn.Txn.Sender: 1,
},
@@ -1884,7 +1823,7 @@ byte 0x414c474f
app_global_get_ex
== // two zeros
`
- ep.Txn.Txn.ForeignApps = []basics.AppIndex{txn.Txn.ApplicationID}
+ ep.TxnGroup[0].Txn.ForeignApps = []basics.AppIndex{txn.Txn.ApplicationID}
delta = testApp(t, source, ep)
require.Len(t, delta.GlobalDelta, 1)
vd := delta.GlobalDelta["ALGO"]
@@ -2020,38 +1959,33 @@ err
ok:
int 1
`
- ep := defaultEvalParams(nil, nil)
txn := makeSampleTxn()
txn.Txn.ApplicationID = 100
- ep.Txn = &txn
- ledger := logictest.MakeLedger(
+ ep := defaultEvalParams(&txn)
+ ledger := MakeLedger(
map[basics.Address]uint64{
txn.Txn.Sender: 1,
},
)
ep.Ledger = ledger
ledger.NewApp(txn.Txn.Sender, 100, basics.AppParams{})
+ ledger.NewLocals(txn.Txn.Sender, 100)
ledger.NewAccount(txn.Txn.Receiver, 1)
ledger.NewLocals(txn.Txn.Receiver, 100)
sb := strings.Builder{}
ep.Trace = &sb
- testApp(t, source, ep)
- delta, err := ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta := testApp(t, source, ep)
require.Equal(t, 0, len(delta.GlobalDelta))
require.Equal(t, 2, len(delta.LocalDeltas))
ledger.Reset()
// test that app_local_put and _app_local_del can use byte addresses
- testApp(t, strings.Replace(source, "int 0 // sender", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00\"", -1), ep)
- // But won't compile in old teal
+ delta = testApp(t, strings.Replace(source, "int 0 // sender", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00\"", -1), ep)
+ // But won't even compile in old teal
testProg(t, strings.Replace(source, "int 0 // sender", "byte \"aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00\"", -1), directRefEnabledVersion-1,
- expect{4, "app_local_put arg 0 wanted..."}, expect{11, "app_local_del arg 0 wanted..."})
-
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ Expect{4, "app_local_put arg 0 wanted..."}, Expect{11, "app_local_del arg 0 wanted..."})
require.Equal(t, 0, len(delta.GlobalDelta))
require.Equal(t, 2, len(delta.LocalDeltas))
@@ -2075,9 +2009,7 @@ app_local_get_ex
== // two zeros
`
- testApp(t, source, ep)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Equal(t, 0, len(delta.GlobalDelta))
require.Equal(t, 1, len(delta.LocalDeltas))
vd := delta.LocalDeltas[0]["ALGO"]
@@ -2105,9 +2037,7 @@ byte 0x414c474f41
int 0x78
app_local_put
`
- testApp(t, source, ep)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Equal(t, 0, len(delta.GlobalDelta))
require.Equal(t, 1, len(delta.LocalDeltas))
vd = delta.LocalDeltas[0]["ALGOA"]
@@ -2131,9 +2061,7 @@ int 0x78
app_local_put
int 1
`
- testApp(t, source, ep)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Equal(t, 0, len(delta.GlobalDelta))
require.Equal(t, 1, len(delta.LocalDeltas))
vd = delta.LocalDeltas[0]["ALGO"]
@@ -2160,9 +2088,7 @@ byte 0x414c474f
app_local_del
int 1
`
- testApp(t, source, ep)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Equal(t, 0, len(delta.GlobalDelta))
require.Equal(t, 1, len(delta.LocalDeltas))
vd = delta.LocalDeltas[0]["ALGO"]
@@ -2189,9 +2115,7 @@ byte 0x414c474f41
app_local_del
int 1
`
- testApp(t, source, ep)
- delta, err = ledger.GetDelta(&ep.Txn.Txn)
- require.NoError(t, err)
+ delta = testApp(t, source, ep)
require.Equal(t, 0, len(delta.GlobalDelta))
require.Equal(t, 1, len(delta.LocalDeltas))
require.Equal(t, 1, len(delta.LocalDeltas[0]))
@@ -2200,22 +2124,17 @@ int 1
func TestEnumFieldErrors(t *testing.T) {
partitiontest.PartitionTest(t)
- ep := defaultEvalParams(nil, nil)
-
source := `txn Amount`
- origTxnType := TxnFieldTypes[Amount]
- TxnFieldTypes[Amount] = StackBytes
+ origSpec := txnFieldSpecByField[Amount]
+ changed := origSpec
+ changed.ftype = StackBytes
+ txnFieldSpecByField[Amount] = changed
defer func() {
- TxnFieldTypes[Amount] = origTxnType
+ txnFieldSpecByField[Amount] = origSpec
}()
- ops := testProg(t, source, AssemblerMaxVersion)
- _, err := Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "Amount expected field type is []byte but got uint64")
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "Amount expected field type is []byte but got uint64")
+ testLogic(t, source, AssemblerMaxVersion, defaultEvalParams(nil), "Amount expected field type is []byte but got uint64")
+ testApp(t, source, defaultEvalParams(nil), "Amount expected field type is []byte but got uint64")
source = `global MinTxnFee`
@@ -2227,20 +2146,11 @@ func TestEnumFieldErrors(t *testing.T) {
globalFieldSpecByField[MinTxnFee] = origMinTxnFs
}()
- ops = testProg(t, source, AssemblerMaxVersion)
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "MinTxnFee expected field type is []byte but got uint64")
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "MinTxnFee expected field type is []byte but got uint64")
+ testLogic(t, source, AssemblerMaxVersion, defaultEvalParams(nil), "MinTxnFee expected field type is []byte but got uint64")
+ testApp(t, source, defaultEvalParams(nil), "MinTxnFee expected field type is []byte but got uint64")
- txn := makeSampleTxn()
- ledger := logictest.MakeLedger(
- map[basics.Address]uint64{
- txn.Txn.Sender: 1,
- },
- )
+ ep, tx, ledger := makeSampleEnv()
+ ledger.NewAccount(tx.Sender, 1)
params := basics.AssetParams{
Total: 1000,
Decimals: 2,
@@ -2248,20 +2158,17 @@ func TestEnumFieldErrors(t *testing.T) {
UnitName: "ALGO",
AssetName: "",
URL: string(protocol.PaymentTx),
- Manager: txn.Txn.Sender,
- Reserve: txn.Txn.Receiver,
- Freeze: txn.Txn.Receiver,
- Clawback: txn.Txn.Receiver,
+ Manager: tx.Sender,
+ Reserve: tx.Receiver,
+ Freeze: tx.Receiver,
+ Clawback: tx.Receiver,
}
- ledger.NewAsset(txn.Txn.Sender, 55, params)
-
- ep.Txn = &txn
- ep.Ledger = ledger
+ ledger.NewAsset(tx.Sender, 55, params)
source = `int 0
int 55
asset_holding_get AssetBalance
-pop
+assert
`
origBalanceFs := assetHoldingFieldSpecByField[AssetBalance]
badBalanceFs := origBalanceFs
@@ -2271,14 +2178,11 @@ pop
assetHoldingFieldSpecByField[AssetBalance] = origBalanceFs
}()
- ops = testProg(t, source, AssemblerMaxVersion)
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "AssetBalance expected field type is []byte but got uint64")
+ testApp(t, source, ep, "AssetBalance expected field type is []byte but got uint64")
source = `int 0
asset_params_get AssetTotal
-pop
+assert
`
origTotalFs := assetParamsFieldSpecByField[AssetTotal]
badTotalFs := origTotalFs
@@ -2288,44 +2192,32 @@ pop
assetParamsFieldSpecByField[AssetTotal] = origTotalFs
}()
- ops = testProg(t, source, AssemblerMaxVersion)
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "AssetTotal expected field type is []byte but got uint64")
+ testApp(t, source, ep, "AssetTotal expected field type is []byte but got uint64")
}
func TestReturnTypes(t *testing.T) {
partitiontest.PartitionTest(t)
- // Ensure all opcodes return values they supposed to according to the OpSpecs table
+ // Ensure all opcodes return values they are supposed to according to the OpSpecs table
t.Parallel()
typeToArg := map[StackType]string{
StackUint64: "int 1\n",
StackAny: "int 1\n",
StackBytes: "byte 0x33343536\n",
}
- ep := defaultEvalParams(nil, nil)
- txn := makeSampleTxn()
- txn.Txn.Type = protocol.ApplicationCallTx
- txgroup := makeSampleTxnGroup(txn)
- ep.Txn = &txn
- ep.TxnGroup = txgroup
- ep.Txn.Txn.ApplicationID = 1
- ep.Txn.Txn.ForeignApps = []basics.AppIndex{txn.Txn.ApplicationID}
- ep.Txn.Txn.ForeignAssets = []basics.AssetIndex{basics.AssetIndex(1), basics.AssetIndex(1)}
- ep.GroupIndex = 1
- ep.PastSideEffects = MakePastSideEffects(len(txgroup))
- txn.Lsig.Args = [][]byte{
+ ep, tx, ledger := makeSampleEnv()
+ tx.Type = protocol.ApplicationCallTx
+ tx.ApplicationID = 1
+ tx.ForeignApps = []basics.AppIndex{tx.ApplicationID}
+ tx.ForeignAssets = []basics.AssetIndex{basics.AssetIndex(1), basics.AssetIndex(1)}
+ ep.TxnGroup[0].Lsig.Args = [][]byte{
[]byte("aoeu"),
[]byte("aoeu"),
[]byte("aoeu2"),
[]byte("aoeu3"),
}
- ledger := logictest.MakeLedger(
- map[basics.Address]uint64{
- txn.Txn.Sender: 1,
- },
- )
+ ep.pastScratch[0] = &scratchSpace{} // for gload
+ ledger.NewAccount(tx.Sender, 1)
params := basics.AssetParams{
Total: 1000,
Decimals: 2,
@@ -2333,23 +2225,20 @@ func TestReturnTypes(t *testing.T) {
UnitName: "ALGO",
AssetName: "",
URL: string(protocol.PaymentTx),
- Manager: txn.Txn.Sender,
- Reserve: txn.Txn.Receiver,
- Freeze: txn.Txn.Receiver,
- Clawback: txn.Txn.Receiver,
+ Manager: tx.Sender,
+ Reserve: tx.Receiver,
+ Freeze: tx.Receiver,
+ Clawback: tx.Receiver,
}
- ledger.NewAsset(txn.Txn.Sender, 1, params)
- ledger.NewApp(txn.Txn.Sender, 1, basics.AppParams{})
- ledger.SetTrackedCreatable(0, basics.CreatableLocator{Index: 1})
- ledger.NewAccount(txn.Txn.Receiver, 1000000)
- ledger.NewLocals(txn.Txn.Receiver, 1)
+ ledger.NewAsset(tx.Sender, 1, params)
+ ledger.NewApp(tx.Sender, 1, basics.AppParams{})
+ ledger.NewAccount(tx.Receiver, 1000000)
+ ledger.NewLocals(tx.Receiver, 1)
key, err := hex.DecodeString("33343536")
require.NoError(t, err)
algoValue := basics.TealValue{Type: basics.TealUintType, Uint: 0x77}
- ledger.NewLocal(txn.Txn.Receiver, 1, string(key), algoValue)
- ledger.NewAccount(basics.AppIndex(1).Address(), 1000000)
-
- ep.Ledger = ledger
+ ledger.NewLocal(tx.Receiver, 1, string(key), algoValue)
+ ledger.NewAccount(appAddr(1), 1000000)
specialCmd := map[string]string{
"txn": "txn Sender",
@@ -2362,6 +2251,7 @@ func TestReturnTypes(t *testing.T) {
"store": "store 0",
"gload": "gload 0 0",
"gloads": "gloads 0",
+ "gloadss": "pop; pop; int 0; int 1; gloadss", // Needs txn index = 0 to work
"gaid": "gaid 0",
"dig": "dig 0",
"cover": "cover 0",
@@ -2380,19 +2270,26 @@ func TestReturnTypes(t *testing.T) {
"asset_params_get": "asset_params_get AssetTotal",
"asset_holding_get": "asset_holding_get AssetBalance",
"gtxns": "gtxns Sender",
- "gtxnsa": "gtxnsa ApplicationArgs 0",
+ "gtxnsa": "pop; int 0; gtxnsa ApplicationArgs 0",
"pushint": "pushint 7272",
"pushbytes": `pushbytes "jojogoodgorilla"`,
"app_params_get": "app_params_get AppGlobalNumUint",
+ "acct_params_get": "acct_params_get AcctMinBalance",
"extract": "extract 0 2",
"txnas": "txnas ApplicationArgs",
"gtxnas": "gtxnas 0 ApplicationArgs",
"gtxnsas": "pop; pop; int 0; int 0; gtxnsas ApplicationArgs",
"args": "args",
"itxn": "itxn_begin; int pay; itxn_field TypeEnum; itxn_submit; itxn CreatedAssetID",
- // This next one is a cop out. Can't use itxna Logs until we have inner appl
- "itxna": "itxn_begin; int pay; itxn_field TypeEnum; itxn_submit; itxn NumLogs",
- "base64_decode": `pushbytes "YWJjMTIzIT8kKiYoKSctPUB+"; base64_decode StdEncoding; pushbytes "abc123!?$*&()'-=@~"; ==; pushbytes "YWJjMTIzIT8kKiYoKSctPUB-"; base64_decode URLEncoding; pushbytes "abc123!?$*&()'-=@~"; ==; &&; assert`,
+ "itxna": "itxn_begin; int pay; itxn_field TypeEnum; itxn_submit; itxna Accounts 0",
+ "gitxn": "itxn_begin; int pay; itxn_field TypeEnum; itxn_submit; gitxn 0 Sender",
+ "gitxna": "itxn_begin; int pay; itxn_field TypeEnum; itxn_submit; gitxna 0 Accounts 0",
+ "base64_decode": `pushbytes "YWJjMTIzIT8kKiYoKSctPUB+"; base64_decode StdEncoding; pushbytes "abc123!?$*&()'-=@~"; ==; pushbytes "YWJjMTIzIT8kKiYoKSctPUB-"; base64_decode URLEncoding; pushbytes "abc123!?$*&()'-=@~"; ==; &&; assert`,
+ }
+
+ /* Make sure the specialCmd tests the opcode in question */
+ for opcode, cmd := range specialCmd {
+ assert.Contains(t, cmd, opcode)
}
// these require special input data and are tested separately
@@ -2423,19 +2320,23 @@ func TestReturnTypes(t *testing.T) {
source := sb.String()
ops := testProg(t, source, AssemblerMaxVersion)
- var trace strings.Builder
- ep.Trace = &trace
-
var cx EvalContext
cx.EvalParams = ep
cx.runModeFlags = m
+ cx.appID = 1
+
+ // These set conditions for some ops that examine the group.
+ // This convinces them all to work. Revisit.
+ cx.Txn = &ep.TxnGroup[0]
+ cx.GroupIndex = 1
+ cx.TxnGroup[0].ConfigAsset = 100
eval(ops.Program, &cx)
require.Equal(
t,
len(spec.Returns), len(cx.stack),
- fmt.Sprintf("\n%s%s expected to return %d values but stack has %d", trace.String(), spec.Name, len(spec.Returns), len(cx.stack)),
+ fmt.Sprintf("\n%s%s expected to return %d values but stack is %v", ep.Trace.String(), spec.Name, len(spec.Returns), cx.stack),
)
for i := 0; i < len(spec.Returns); i++ {
sp := len(cx.stack) - 1 - i
@@ -2454,7 +2355,7 @@ func TestReturnTypes(t *testing.T) {
func TestRound(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- ep, _ := makeSampleEnv()
+ ep, _, _ := makeSampleEnv()
source := "global Round; int 1; >="
testApp(t, source, ep)
}
@@ -2462,7 +2363,7 @@ func TestRound(t *testing.T) {
func TestLatestTimestamp(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- ep, _ := makeSampleEnv()
+ ep, _, _ := makeSampleEnv()
source := "global LatestTimestamp; int 1; >="
testApp(t, source, ep)
}
@@ -2470,8 +2371,8 @@ func TestLatestTimestamp(t *testing.T) {
func TestCurrentApplicationID(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 42, basics.AppParams{})
+ ep, tx, _ := makeSampleEnv()
+ tx.ApplicationID = 42
source := "global CurrentApplicationID; int 42; =="
testApp(t, source, ep)
}
@@ -2479,7 +2380,7 @@ func TestCurrentApplicationID(t *testing.T) {
func TestAppLoop(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- ep, _ := makeSampleEnv()
+ ep, _, _ := makeSampleEnv()
stateful := "global CurrentApplicationID; pop;"
@@ -2504,25 +2405,29 @@ func TestPooledAppCallsVerifyOp(t *testing.T) {
pop
int 1`
- ep, _ := makeSampleEnv()
- ep.Proto.EnableAppCostPooling = true
- ep.PooledApplicationBudget = new(uint64)
+ ledger := MakeLedger(nil)
+ call := transactions.SignedTxn{Txn: transactions.Transaction{Type: protocol.ApplicationCallTx}}
// Simulate test with 2 grouped txn
- *ep.PooledApplicationBudget = uint64(ep.Proto.MaxAppProgramCost * 2)
- testApp(t, source, ep, "pc=107 dynamic cost budget exceeded, executing ed25519verify: remaining budget is 1400 but program cost was 1905")
+ testApps(t, []string{source, ""}, []transactions.SignedTxn{call, call}, LogicVersion, ledger,
+ Expect{0, "pc=107 dynamic cost budget exceeded, executing ed25519verify: remaining budget is 1400 but program cost was 1905"})
// Simulate test with 3 grouped txn
- *ep.PooledApplicationBudget = uint64(ep.Proto.MaxAppProgramCost * 3)
- testApp(t, source, ep)
+ testApps(t, []string{source, "", ""}, []transactions.SignedTxn{call, call, call}, LogicVersion, ledger)
}
-func TestAppAddress(t *testing.T) {
- ep, ledger := makeSampleEnv()
- ledger.NewApp(ep.Txn.Txn.Receiver, 888, basics.AppParams{})
- source := fmt.Sprintf("global CurrentApplicationAddress; addr %s; ==;", basics.AppIndex(888).Address())
+func appAddr(id int) basics.Address {
+ return basics.AppIndex(id).Address()
+}
+
+func TestAppInfo(t *testing.T) {
+ ep, tx, ledger := makeSampleEnv()
+ require.Equal(t, 888, int(tx.ApplicationID))
+ ledger.NewApp(tx.Receiver, 888, basics.AppParams{})
+ testApp(t, "global CurrentApplicationID; int 888; ==;", ep)
+ source := fmt.Sprintf("global CurrentApplicationAddress; addr %s; ==;", appAddr(888))
testApp(t, source, ep)
- source = fmt.Sprintf("int 0; app_params_get AppAddress; assert; addr %s; ==;", basics.AppIndex(888).Address())
+ source = fmt.Sprintf("int 0; app_params_get AppAddress; assert; addr %s; ==;", appAddr(888))
testApp(t, source, ep)
// To document easy construction:
@@ -2531,3 +2436,54 @@ func TestAppAddress(t *testing.T) {
source = fmt.Sprintf("int 0; app_params_get AppAddress; assert; addr %s; ==;", a)
testApp(t, source, ep)
}
+
+func TestBudget(t *testing.T) {
+ ep := defaultEvalParams(nil)
+ source := `
+global OpcodeBudget
+int 699
+==
+assert
+global OpcodeBudget
+int 695
+==
+`
+ testApp(t, source, ep)
+}
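
For context on the 699/695 expectations in TestBudget above, here is a minimal sketch, not part of the patch, assuming the test proto grants 700 budget units to a single app call, that every opcode in this program costs 1, and that OpcodeBudget reports the budget left after the current opcode is charged.

package main

import "fmt"

func main() {
	budget := 700 // assumed starting budget (MaxAppProgramCost) for one app call
	ops := []string{"global OpcodeBudget", "int 699", "==", "assert", "global OpcodeBudget"}
	for _, op := range ops {
		budget-- // each of these opcodes is assumed to cost 1
		fmt.Printf("after %q remaining budget = %d\n", op, budget)
	}
	// The first OpcodeBudget read sees 699; the second, four opcodes later, sees 695.
}
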
+
+func TestSelfMutate(t *testing.T) {
+ ep, _, ledger := makeSampleEnv()
+
+ /* In order to test the added protection of mutableAccountReference, we're
+ going to set up a ledger in which an app account is opted into
+ itself. That was impossible before v6, and indeed we did not have the
+ extra mutable reference check then. */
+ ledger.NewLocals(basics.AppIndex(888).Address(), 888)
+ ledger.NewLocal(basics.AppIndex(888).Address(), 888, "hey",
+ basics.TealValue{Type: basics.TealUintType, Uint: 77})
+
+ source := `
+global CurrentApplicationAddress
+byte "hey"
+int 42
+app_local_put
+`
+ testApp(t, source, ep, "invalid Account reference for mutation")
+
+ source = `
+global CurrentApplicationAddress
+byte "hey"
+app_local_del
+`
+ testApp(t, source, ep, "invalid Account reference for mutation")
+
+ /* But let's just check normal access is working properly. */
+ source = `
+global CurrentApplicationAddress
+byte "hey"
+app_local_get
+int 77
+==
+`
+ testApp(t, source, ep)
+}
diff --git a/data/transactions/logic/eval_test.go b/data/transactions/logic/eval_test.go
index 72c07f01f..3aea31b1b 100644
--- a/data/transactions/logic/eval_test.go
+++ b/data/transactions/logic/eval_test.go
@@ -25,6 +25,7 @@ import (
"strings"
"testing"
+ "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/algorand/go-algorand/config"
@@ -32,20 +33,19 @@ import (
"github.com/algorand/go-algorand/data/basics"
"github.com/algorand/go-algorand/data/bookkeeping"
"github.com/algorand/go-algorand/data/transactions"
- "github.com/algorand/go-algorand/data/transactions/logictest"
"github.com/algorand/go-algorand/logging"
"github.com/algorand/go-algorand/protocol"
"github.com/algorand/go-algorand/test/partitiontest"
)
-// Note that most of the tests use defaultEvalProto/defaultEvalParams as evaluator version so that
+// Note that most of the tests use makeTestProto/defaultEvalParams as the evaluator configuration so that
// we check that TEAL v1 and v2 programs are compatible with the latest evaluator
-func defaultEvalProto() config.ConsensusParams {
- return defaultEvalProtoWithVersion(LogicVersion)
+func makeTestProto() *config.ConsensusParams {
+ return makeTestProtoV(LogicVersion)
}
-func defaultEvalProtoWithVersion(version uint64) config.ConsensusParams {
- return config.ConsensusParams{
+func makeTestProtoV(version uint64) *config.ConsensusParams {
+ return &config.ConsensusParams{
LogicSigVersion: version,
LogicSigMaxCost: 20000,
Application: version >= appsEnabledVersion,
@@ -79,45 +79,75 @@ func defaultEvalProtoWithVersion(version uint64) config.ConsensusParams {
EnableFeePooling: true,
// Chosen to be different from one another and from normal proto
- MaxAppTxnAccounts: 3,
- MaxAppTxnForeignApps: 5,
- MaxAppTxnForeignAssets: 6,
- }
-}
+ MaxAppTxnAccounts: 3,
+ MaxAppTxnForeignApps: 5,
+ MaxAppTxnForeignAssets: 6,
+ MaxAppTotalTxnReferences: 7,
+
+ MaxAppArgs: 12,
+ MaxAppTotalArgLen: 800,
+
+ MaxAppProgramLen: 900,
+ MaxAppTotalProgramLen: 1200, // Weird, but better tests
+ MaxExtraAppProgramPages: 2,
-func defaultEvalParamsV1(sb *strings.Builder, txn *transactions.SignedTxn) EvalParams {
- return defaultEvalParamsWithVersion(sb, txn, 1)
+ MaxGlobalSchemaEntries: 30,
+ MaxLocalSchemaEntries: 13,
+
+ EnableAppCostPooling: true,
+ EnableInnerTransactionPooling: true,
+ }
}
-func defaultEvalParams(sb *strings.Builder, txn *transactions.SignedTxn) EvalParams {
- return defaultEvalParamsWithVersion(sb, txn, LogicVersion)
+func defaultEvalParams(txn *transactions.SignedTxn) *EvalParams {
+ return defaultEvalParamsWithVersion(txn, LogicVersion)
}
-func benchmarkEvalParams(sb *strings.Builder, txn *transactions.SignedTxn) EvalParams {
- ep := defaultEvalParamsWithVersion(sb, txn, LogicVersion)
- ep.Proto.LogicSigMaxCost = 1000 * 1000
+func benchmarkEvalParams(txn *transactions.SignedTxn) *EvalParams {
+ ep := defaultEvalParamsWithVersion(txn, LogicVersion)
+ ep.Trace = nil // Tracing would slow down benchmarks
+ clone := *ep.Proto
+ bigBudget := uint64(1000 * 1000) // Allow long run times
+ clone.LogicSigMaxCost = bigBudget
+ clone.MaxAppProgramCost = int(bigBudget)
+ ep.Proto = &clone
+ ep.PooledApplicationBudget = &bigBudget
return ep
}
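
A side note on the clone above: copying *ep.Proto before raising the budget keeps the enlarged limits local to the benchmark's EvalParams. Below is a minimal sketch of that aliasing concern, using a stand-in struct rather than the real config.ConsensusParams; it is illustrative only and not part of the patch.

package main

import "fmt"

type proto struct{ LogicSigMaxCost uint64 }

func main() {
	shared := &proto{LogicSigMaxCost: 20000}
	benchmark, regular := shared, shared // two params sharing one proto pointer

	clone := *benchmark             // copy the struct...
	clone.LogicSigMaxCost = 1000000 // ...raise the budget on the copy...
	benchmark = &clone              // ...and point only the benchmark at it

	fmt.Println(benchmark.LogicSigMaxCost, regular.LogicSigMaxCost) // 1000000 20000
}
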
-func defaultEvalParamsWithVersion(sb *strings.Builder, txn *transactions.SignedTxn, version uint64) EvalParams {
- proto := defaultEvalProtoWithVersion(version)
-
- var pt *transactions.SignedTxn
+func defaultEvalParamsWithVersion(txn *transactions.SignedTxn, version uint64) *EvalParams {
+ ep := &EvalParams{
+ Proto: makeTestProtoV(version),
+ TxnGroup: make([]transactions.SignedTxnWithAD, 1),
+ Specials: &transactions.SpecialAddresses{},
+ Trace: &strings.Builder{},
+ }
if txn != nil {
- pt = txn
- } else {
- pt = &transactions.SignedTxn{}
+ ep.TxnGroup[0].SignedTxn = *txn
}
+ ep.reset()
+ return ep
+}
- ep := EvalParams{}
- ep.Proto = &proto
- ep.Txn = pt
- ep.PastSideEffects = MakePastSideEffects(5)
- ep.Specials = &transactions.SpecialAddresses{}
- if sb != nil { // have to do this since go's nil semantics: https://golang.org/doc/faq#nil_error
- ep.Trace = sb
+// reset puts an ep back into its original state. This is in *_test.go because
+// no real code should ever need this. EvalParams should be created to evaluate
+// a group, and then thrown away.
+func (ep *EvalParams) reset() {
+ if ep.Proto.EnableAppCostPooling {
+ budget := uint64(ep.Proto.MaxAppProgramCost)
+ ep.PooledApplicationBudget = &budget
}
- return ep
+ if ep.Proto.EnableInnerTransactionPooling {
+ inners := ep.Proto.MaxTxGroupSize * ep.Proto.MaxInnerTransactions
+ ep.pooledAllowedInners = &inners
+ }
+ ep.pastScratch = make([]*scratchSpace, ep.Proto.MaxTxGroupSize)
+ for i := range ep.TxnGroup {
+ ep.TxnGroup[i].ApplyData = transactions.ApplyData{}
+ }
+ ep.created = &resources{}
+ ep.appAddrCache = make(map[basics.AppIndex]basics.Address)
+ ep.Trace = &strings.Builder{}
}
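
Illustrative only (not part of the patch): how a test in this package might lean on reset() to evaluate with the same EvalParams twice, given the helpers defined in this file (defaultEvalParams, testProg, testLogicBytes). The name exampleEvalParamsReuse is hypothetical.

func exampleEvalParamsReuse(t *testing.T) {
	ep := defaultEvalParams(nil)
	ops := testProg(t, "int 1", LogicVersion)

	testLogicBytes(t, ops.Program, ep) // first evaluation of the group

	ep.reset()                         // wipe ApplyData, budgets, caches, and the trace
	testLogicBytes(t, ops.Program, ep) // evaluate again from a clean slate
}
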
func TestTooManyArgs(t *testing.T) {
@@ -131,8 +161,7 @@ func TestTooManyArgs(t *testing.T) {
txn.Lsig.Logic = ops.Program
args := [transactions.EvalMaxArgs + 1][]byte{}
txn.Lsig.Args = args[:]
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, &txn))
+ pass, err := EvalSignature(0, defaultEvalParams(&txn))
require.Error(t, err)
require.False(t, pass)
})
@@ -143,32 +172,23 @@ func TestEmptyProgram(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- pass, err := Eval(nil, defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid program (empty)")
- require.False(t, pass)
+ testLogicBytes(t, nil, defaultEvalParams(nil), "invalid", "invalid program (empty)")
}
// TestMinTealVersionParamEval tests eval/check reading the MinTealVersion from the param
-func TestMinTealVersionParamEvalCheck(t *testing.T) {
+func TestMinTealVersionParamEvalCheckSignature(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- params := defaultEvalParams(nil, nil)
+ params := defaultEvalParams(nil)
version2 := uint64(rekeyingEnabledVersion)
params.MinTealVersion = &version2
program := make([]byte, binary.MaxVarintLen64)
// set the teal program version to 1
binary.PutUvarint(program, 1)
- err := Check(program, params)
- require.Contains(t, err.Error(), fmt.Sprintf("program version must be >= %d", appsEnabledVersion))
-
- // If the param is read correctly, the eval should fail
- pass, err := Eval(program, params)
- require.Error(t, err)
- require.Contains(t, err.Error(), fmt.Sprintf("program version must be >= %d", appsEnabledVersion))
- require.False(t, pass)
+ verErr := fmt.Sprintf("program version must be >= %d", appsEnabledVersion)
+ testAppBytes(t, program, params, verErr, verErr)
}
func TestTxnFieldToTealValue(t *testing.T) {
@@ -225,20 +245,8 @@ func TestWrongProtoVersion(t *testing.T) {
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
ops := testProg(t, "int 1", v)
- var txn transactions.SignedTxn
- txn.Lsig.Logic = ops.Program
- sb := strings.Builder{}
- proto := defaultEvalProto()
- proto.LogicSigVersion = 0
- ep := defaultEvalParams(&sb, &txn)
- ep.Proto = &proto
- err := Check(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "LogicSig not supported")
- pass, err := Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "LogicSig not supported")
- require.False(t, pass)
+ ep := defaultEvalParamsWithVersion(nil, 0)
+ testAppBytes(t, ops.Program, ep, "LogicSig not supported", "LogicSig not supported")
})
}
}
@@ -268,11 +276,10 @@ byte base64 5rZMNsevs5sULO+54aN+OvU6lQ503z2X+SSYUABIx7E=
var txn transactions.SignedTxn
txn.Lsig.Logic = ops.Program
txn.Lsig.Args = [][]byte{[]byte("=0\x97S\x85H\xe9\x91B\xfd\xdb;1\xf5Z\xaec?\xae\xf2I\x93\x08\x12\x94\xaa~\x06\x08\x849b")}
- sb := strings.Builder{}
- ep := defaultEvalParams(&sb, &txn)
- err := Check(ops.Program, ep)
+ ep := defaultEvalParams(&txn)
+ err := CheckSignature(0, ep)
require.NoError(t, err)
- pass, err := Eval(ops.Program, ep)
+ pass, err := EvalSignature(0, ep)
require.True(t, pass)
require.NoError(t, err)
})
@@ -329,70 +336,65 @@ func TestTLHC(t *testing.T) {
// right answer
txn.Lsig.Args = [][]byte{secret}
txn.Txn.FirstValid = 999999
- sb := strings.Builder{}
block := bookkeeping.Block{}
- ep := defaultEvalParams(&sb, &txn)
- err := Check(ops.Program, ep)
+ ep := defaultEvalParams(&txn)
+ err := CheckSignature(0, ep)
if err != nil {
t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
}
require.NoError(t, err)
- pass, err := Eval(ops.Program, ep)
+ pass, err := EvalSignature(0, ep)
if pass {
t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
}
require.False(t, pass)
isNotPanic(t, err)
txn.Txn.Receiver = a2
txn.Txn.CloseRemainderTo = a2
- sb = strings.Builder{}
- ep = defaultEvalParams(&sb, &txn)
- pass, err = Eval(ops.Program, ep)
+ ep = defaultEvalParams(&txn)
+ pass, err = EvalSignature(0, ep)
if !pass {
t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
}
require.True(t, pass)
require.NoError(t, err)
txn.Txn.Receiver = a2
txn.Txn.CloseRemainderTo = a2
- sb = strings.Builder{}
txn.Txn.FirstValid = 1
- ep = defaultEvalParams(&sb, &txn)
- pass, err = Eval(ops.Program, ep)
+ ep = defaultEvalParams(&txn)
+ pass, err = EvalSignature(0, ep)
if pass {
t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
}
require.False(t, pass)
isNotPanic(t, err)
txn.Txn.Receiver = a1
txn.Txn.CloseRemainderTo = a1
- sb = strings.Builder{}
txn.Txn.FirstValid = 999999
- ep = defaultEvalParams(&sb, &txn)
- pass, err = Eval(ops.Program, ep)
+ ep = defaultEvalParams(&txn)
+ pass, err = EvalSignature(0, ep)
if !pass {
t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
}
require.True(t, pass)
require.NoError(t, err)
// wrong answer
txn.Lsig.Args = [][]byte{[]byte("=0\x97S\x85H\xe9\x91B\xfd\xdb;1\xf5Z\xaec?\xae\xf2I\x93\x08\x12\x94\xaa~\x06\x08\x849a")}
- sb = strings.Builder{}
block.BlockHeader.Round = 1
- ep = defaultEvalParams(&sb, &txn)
- pass, err = Eval(ops.Program, ep)
+ ep = defaultEvalParams(&txn)
+ pass, err = EvalSignature(0, ep)
if pass {
t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
}
require.False(t, pass)
isNotPanic(t, err)
@@ -790,20 +792,8 @@ func TestTxnBadField(t *testing.T) {
t.Parallel()
program := []byte{0x01, 0x31, 0x7f}
- err := Check(program, defaultEvalParams(nil, nil))
- require.NoError(t, err) // TODO: Check should know the type stack was wrong
- sb := strings.Builder{}
- var txn transactions.SignedTxn
- txn.Lsig.Logic = program
- txn.Lsig.Args = nil
- pass, err := Eval(program, defaultEvalParams(&sb, &txn))
- if pass {
- t.Log(hex.EncodeToString(program))
- t.Log(sb.String())
- }
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, program, defaultEvalParams(nil), "invalid txn field")
+ // TODO: Check should know the type stack was wrong
// test txn does not accept ApplicationArgs and Accounts
txnOpcode := OpsByName[LogicVersion]["txn"].Opcode
@@ -815,10 +805,7 @@ func TestTxnBadField(t *testing.T) {
ops := testProg(t, source, AssemblerMaxVersion)
require.Equal(t, txnaOpcode, ops.Program[1])
ops.Program[1] = txnOpcode
- pass, err = Eval(ops.Program, defaultEvalParams(&sb, &txn))
- require.Error(t, err)
- require.Contains(t, err.Error(), fmt.Sprintf("invalid txn field %d", field))
- require.False(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), fmt.Sprintf("invalid txn field %d", field))
}
}
@@ -827,49 +814,16 @@ func TestGtxnBadIndex(t *testing.T) {
t.Parallel()
program := []byte{0x01, 0x33, 0x1, 0x01}
- err := Check(program, defaultEvalParams(nil, nil))
- require.NoError(t, err) // TODO: Check should know the type stack was wrong
- sb := strings.Builder{}
- var txn transactions.SignedTxn
- txn.Lsig.Logic = program
- txn.Lsig.Args = nil
- txgroup := make([]transactions.SignedTxn, 1)
- txgroup[0] = txn
- ep := defaultEvalParams(&sb, &txn)
- ep.TxnGroup = txgroup
- pass, err := Eval(program, ep)
- if pass {
- t.Log(hex.EncodeToString(program))
- t.Log(sb.String())
- }
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, program, defaultEvalParams(nil), "gtxn lookup")
}
func TestGtxnBadField(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- program := []byte{0x01, 0x33, 0x0, 0x7f}
- err := Check(program, defaultEvalParams(nil, nil))
- require.NoError(t, err) // TODO: Check should know the type stack was wrong
- sb := strings.Builder{}
- var txn transactions.SignedTxn
- txn.Lsig.Logic = program
- txn.Lsig.Args = nil
- txgroup := make([]transactions.SignedTxn, 1)
- txgroup[0] = txn
- ep := defaultEvalParams(&sb, &txn)
- ep.TxnGroup = txgroup
- pass, err := Eval(program, ep)
- if pass {
- t.Log(hex.EncodeToString(program))
- t.Log(sb.String())
- }
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
+ program := []byte{0x01, 0x33, 0x0, 127}
+ // TODO: Check should know the type stack was wrong
+ testLogicBytes(t, program, defaultEvalParams(nil), "invalid txn field 127")
// test gtxn does not accept ApplicationArgs and Accounts
txnOpcode := OpsByName[LogicVersion]["txn"].Opcode
@@ -881,10 +835,7 @@ func TestGtxnBadField(t *testing.T) {
ops := testProg(t, source, AssemblerMaxVersion)
require.Equal(t, txnaOpcode, ops.Program[1])
ops.Program[1] = txnOpcode
- pass, err = Eval(ops.Program, defaultEvalParams(&sb, &txn))
- require.Error(t, err)
- require.Contains(t, err.Error(), fmt.Sprintf("invalid txn field %d", field))
- require.False(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), fmt.Sprintf("invalid txn field %d", field))
}
}
@@ -892,21 +843,8 @@ func TestGlobalBadField(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- program := []byte{0x01, 0x32, 0x7f}
- err := Check(program, defaultEvalParams(nil, nil))
- require.NoError(t, err) // Check does not validates opcode args
- sb := strings.Builder{}
- var txn transactions.SignedTxn
- txn.Lsig.Logic = program
- txn.Lsig.Args = nil
- pass, err := Eval(program, defaultEvalParams(&sb, &txn))
- if pass {
- t.Log(hex.EncodeToString(program))
- t.Log(sb.String())
- }
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
+ program := []byte{0x01, 0x32, 127}
+ testLogicBytes(t, program, defaultEvalParams(nil), "invalid global field")
}
func TestArg(t *testing.T) {
@@ -919,11 +857,8 @@ func TestArg(t *testing.T) {
if v >= 5 {
source += "int 0; args; int 1; args; ==; assert; int 2; args; int 3; args; !=; assert"
}
- ops := testProg(t, source, v)
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err)
+
var txn transactions.SignedTxn
- txn.Lsig.Logic = ops.Program
txn.Lsig.Args = [][]byte{
[]byte("aoeu"),
[]byte("aoeu"),
@@ -931,28 +866,22 @@ func TestArg(t *testing.T) {
[]byte("aoeu3"),
[]byte("aoeu4"),
}
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, &txn))
- if !pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.NoError(t, err)
- require.True(t, pass)
+ ops := testProg(t, source, v)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn))
})
}
}
const globalV1TestProgram = `
global MinTxnFee
-int 123
+int 1001
==
global MinBalance
-int 1000000
+int 1001
==
&&
global MaxTxnLife
-int 999
+int 1500
==
&&
global ZeroAddress
@@ -981,7 +910,7 @@ int 0
>
&&
global CurrentApplicationID
-int 42
+int 888
==
&&
`
@@ -1009,7 +938,17 @@ byte 0x0706000000000000000000000000000000000000000000000000000000000000
`
const globalV6TestProgram = globalV5TestProgram + `
-// No new globals in v6
+global OpcodeBudget
+int 0
+>
+&&
+global CallerApplicationAddress
+global ZeroAddress
+==
+&&
+global CallerApplicationID
+!
+&&
`
func TestGlobal(t *testing.T) {
@@ -1019,81 +958,43 @@ func TestGlobal(t *testing.T) {
type desc struct {
lastField GlobalField
program string
- eval func([]byte, EvalParams) (bool, error)
- check func([]byte, EvalParams) error
}
+ // Associate the highest allowed global field with each version's test program
tests := map[uint64]desc{
- 0: {GroupSize, globalV1TestProgram, Eval, Check},
- 1: {GroupSize, globalV1TestProgram, Eval, Check},
- 2: {
- CurrentApplicationID, globalV2TestProgram,
- EvalStateful, CheckStateful,
- },
- 3: {
- CreatorAddress, globalV3TestProgram,
- EvalStateful, CheckStateful,
- },
- 4: {
- CreatorAddress, globalV4TestProgram,
- EvalStateful, CheckStateful,
- },
- 5: {
- GroupID, globalV5TestProgram,
- EvalStateful, CheckStateful,
- },
- 6: {
- GroupID, globalV6TestProgram,
- EvalStateful, CheckStateful,
- },
+ 0: {GroupSize, globalV1TestProgram},
+ 1: {GroupSize, globalV1TestProgram},
+ 2: {CurrentApplicationID, globalV2TestProgram},
+ 3: {CreatorAddress, globalV3TestProgram},
+ 4: {CreatorAddress, globalV4TestProgram},
+ 5: {GroupID, globalV5TestProgram},
+ 6: {CallerApplicationAddress, globalV6TestProgram},
}
// tests keys are versions so they must be in a range 1..AssemblerMaxVersion plus zero version
require.LessOrEqual(t, len(tests), AssemblerMaxVersion+1)
+ require.Len(t, globalFieldSpecs, int(invalidGlobalField))
- ledger := logictest.MakeLedger(nil)
+ ledger := MakeLedger(nil)
addr, err := basics.UnmarshalChecksumAddress(testAddr)
require.NoError(t, err)
- ledger.NewApp(addr, basics.AppIndex(42), basics.AppParams{})
+ ledger.NewApp(addr, 888, basics.AppParams{})
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
_, ok := tests[v]
require.True(t, ok)
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
last := tests[v].lastField
testProgram := tests[v].program
- check := tests[v].check
- eval := tests[v].eval
- for _, globalField := range GlobalFieldNames[:last] {
+ for _, globalField := range GlobalFieldNames[:last+1] {
if !strings.Contains(testProgram, globalField) {
t.Errorf("TestGlobal missing field %v", globalField)
}
}
- ops := testProg(t, testProgram, v)
- err := check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err)
- var txn transactions.SignedTxn
- txn.Lsig.Logic = ops.Program
+
+ txn := transactions.SignedTxn{}
txn.Txn.Group = crypto.Digest{0x07, 0x06}
- txgroup := make([]transactions.SignedTxn, 1)
- txgroup[0] = txn
- sb := strings.Builder{}
- proto := config.ConsensusParams{
- MinTxnFee: 123,
- MinBalance: 1000000,
- MaxTxnLife: 999,
- LogicSigVersion: LogicVersion,
- LogicSigMaxCost: 20000,
- MaxAppProgramCost: 700,
- }
- ep := defaultEvalParams(&sb, &txn)
- ep.TxnGroup = txgroup
- ep.Proto = &proto
+
+ ep := defaultEvalParams(&txn)
ep.Ledger = ledger
- pass, err := eval(ops.Program, ep)
- if !pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.NoError(t, err)
- require.True(t, pass)
+ testApp(t, tests[v].program, ep)
})
}
}
@@ -1134,19 +1035,14 @@ int %s
==
&&`, symbol, string(tt))
ops := testProg(t, text, v)
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err)
- var txn transactions.SignedTxn
+ txn := transactions.SignedTxn{}
txn.Txn.Type = tt
- sb := strings.Builder{}
- ep := defaultEvalParams(&sb, &txn)
- pass, err := Eval(ops.Program, ep)
- if !pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ if v < appsEnabledVersion && tt == protocol.ApplicationCallTx {
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn),
+ "program version must be", "program version must be")
+ return
}
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn))
})
}
})
@@ -1283,7 +1179,7 @@ arg 8
`
const testTxnProgramTextV2 = testTxnProgramTextV1 + `txn ApplicationID
-int 123
+int 888
==
&&
txn OnCompletion
@@ -1459,6 +1355,26 @@ int 1
const testTxnProgramTextV6 = testTxnProgramTextV5 + `
assert
+txn CreatedAssetID
+int 0
+==
+assert
+
+txn CreatedApplicationID
+int 0
+==
+assert
+
+txn NumLogs
+int 2
+==
+assert
+
+txn Logs 1
+byte "prefilled"
+==
+assert
+
int 1
`
@@ -1486,7 +1402,7 @@ func makeSampleTxn() transactions.SignedTxn {
txn.Txn.AssetSender = txn.Txn.Receiver
txn.Txn.AssetReceiver = txn.Txn.CloseRemainderTo
txn.Txn.AssetCloseTo = txn.Txn.Sender
- txn.Txn.ApplicationID = basics.AppIndex(123)
+ txn.Txn.ApplicationID = basics.AppIndex(888)
txn.Txn.Accounts = make([]basics.Address, 1)
txn.Txn.Accounts[0] = txn.Txn.Receiver
rekeyToAddr := []byte("aoeuiaoeuiaoeuiaoeuiaoeuiaoeui05")
@@ -1530,32 +1446,31 @@ func makeSampleTxn() transactions.SignedTxn {
return txn
}
-func makeSampleTxnGroup(txn transactions.SignedTxn) []transactions.SignedTxn {
- txgroup := make([]transactions.SignedTxn, 2)
- txgroup[0] = txn
- txgroup[1].Txn.Amount.Raw = 42
- txgroup[1].Txn.Fee.Raw = 1066
- txgroup[1].Txn.FirstValid = 42
- txgroup[1].Txn.LastValid = 1066
- txgroup[1].Txn.Sender = txn.Txn.Receiver
- txgroup[1].Txn.Receiver = txn.Txn.Sender
- txgroup[1].Txn.ExtraProgramPages = 2
- return txgroup
+// makeSampleTxnGroup creates a sample txn group. If fewer than two transactions
+// are supplied, samples are used.
+func makeSampleTxnGroup(txns ...transactions.SignedTxn) []transactions.SignedTxn {
+ if len(txns) == 0 {
+ txns = []transactions.SignedTxn{makeSampleTxn()}
+ }
+ if len(txns) == 1 {
+ second := transactions.SignedTxn{}
+ second.Txn.Type = protocol.PaymentTx
+ second.Txn.Amount.Raw = 42
+ second.Txn.Fee.Raw = 1066
+ second.Txn.FirstValid = 42
+ second.Txn.LastValid = 1066
+ second.Txn.Sender = txns[0].Txn.Receiver
+ second.Txn.Receiver = txns[0].Txn.Sender
+ second.Txn.ExtraProgramPages = 2
+ txns = append(txns, second)
+ }
+ return txns
}
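
Illustrative only (not part of the patch): the three ways the new variadic makeSampleTxnGroup can be called, per its doc comment. exampleSampleGroups is a hypothetical name.

func exampleSampleGroups() {
	g0 := makeSampleTxnGroup()                // no args: a sample txn plus a synthesized payment
	g1 := makeSampleTxnGroup(makeSampleTxn()) // one txn: a second payment is appended
	a, b := makeSampleTxn(), makeSampleTxn()
	g2 := makeSampleTxnGroup(a, b)            // two or more txns: returned as given
	_, _, _ = g0, g1, g2                      // each group ends up with at least two txns
}
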
func TestTxn(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- for i, txnField := range TxnFieldNames {
- fs := txnFieldSpecByField[TxnField(i)]
- if !fs.effects && !strings.Contains(testTxnProgramTextV6, txnField) {
- if txnField != FirstValidTime.String() {
- t.Errorf("TestTxn missing field %v", txnField)
- }
- }
- }
-
tests := map[uint64]string{
1: testTxnProgramTextV1,
2: testTxnProgramTextV2,
@@ -1565,13 +1480,31 @@ func TestTxn(t *testing.T) {
6: testTxnProgramTextV6,
}
+ for i, txnField := range TxnFieldNames {
+ fs := txnFieldSpecByField[TxnField(i)]
+ // Ensure that each field appears, starting in the version it was introduced
+ for v := uint64(1); v <= uint64(LogicVersion); v++ {
+ if v < fs.version {
+ continue
+ }
+ if !strings.Contains(tests[v], txnField) {
+ if txnField == FirstValidTime.String() {
+ continue
+ }
+ // fields were introduced for itxn before they became available for txn
+ if v < txnEffectsVersion && fs.effects {
+ continue
+ }
+ t.Errorf("testTxnProgramTextV%d missing field %v", v, txnField)
+ }
+ }
+ }
+
clearOps := testProg(t, "int 1", 1)
for v, source := range tests {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
ops := testProg(t, source, v)
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err)
txn := makeSampleTxn()
txn.Txn.ApprovalProgram = ops.Program
txn.Txn.ClearStateProgram = clearOps.Program
@@ -1597,17 +1530,28 @@ func TestTxn(t *testing.T) {
programHash[:],
clearProgramHash[:],
}
- sb := strings.Builder{}
- ep := defaultEvalParams(&sb, &txn)
- ep.Ledger = logictest.MakeLedger(nil)
- ep.GroupIndex = 3
- pass, err := Eval(ops.Program, ep)
- if !pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ // Since we test GroupIndex == 3, we need to fake up such a group
+ ep := defaultEvalParams(nil)
+ ep.TxnGroup = transactions.WrapSignedTxnsWithAD([]transactions.SignedTxn{txn, txn, txn, txn})
+ ep.TxnGroup[3].EvalDelta.Logs = []string{"x", "prefilled"}
+ if v < txnEffectsVersion {
+ testLogicFull(t, ops.Program, 3, ep)
+ } else {
+ // Starting in txnEffectsVersion we can't access all fields in Logic mode
+ testLogicFull(t, ops.Program, 3, ep, "not allowed in current mode")
+ // And the early tests use "arg" a lot - not allowed in stateful. So remove those tests.
+ lastArg := strings.Index(source, "arg 10\n==\n&&")
+ require.NotEqual(t, -1, lastArg)
+
+ appSafe := "int 1" + strings.Replace(source[lastArg+12:], `txn Sender
+int 0
+args
+==
+assert`, "", 1)
+
+ ops := testProg(t, appSafe, v)
+ testAppFull(t, ops.Program, 3, basics.AppIndex(888), ep)
}
- require.NoError(t, err)
- require.True(t, pass)
})
}
}
@@ -1656,44 +1600,22 @@ int 0
return
`
ops := testProg(t, cachedTxnProg, 2)
- sb := strings.Builder{}
- err := Check(ops.Program, defaultEvalParams(&sb, nil))
- if err != nil {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.NoError(t, err)
- txn := makeSampleTxn()
- txgroup := makeSampleTxnGroup(txn)
- txn.Lsig.Logic = ops.Program
- txid0 := txgroup[0].ID()
- txid1 := txgroup[1].ID()
- txn.Lsig.Args = [][]byte{
+
+ ep, _, _ := makeSampleEnv()
+ txid0 := ep.TxnGroup[0].ID()
+ txid1 := ep.TxnGroup[1].ID()
+ ep.TxnGroup[0].Lsig.Args = [][]byte{
txid0[:],
txid1[:],
}
- sb = strings.Builder{}
- ep := defaultEvalParams(&sb, &txn)
- ep.TxnGroup = txgroup
- pass, err := Eval(ops.Program, ep)
- if !pass || err != nil {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, ops.Program, ep)
}
func TestGaid(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- checkCreatableIDProg := `
-gaid 0
-int 100
-==
-`
- ops := testProg(t, checkCreatableIDProg, 4)
+ check0 := testProg(t, "gaid 0; int 100; ==", 4)
txn := makeSampleTxn()
txn.Txn.Type = protocol.ApplicationCallTx
txgroup := make([]transactions.SignedTxn, 3)
@@ -1701,53 +1623,40 @@ int 100
targetTxn := makeSampleTxn()
targetTxn.Txn.Type = protocol.AssetConfigTx
txgroup[0] = targetTxn
- sb := strings.Builder{}
- ledger := logictest.MakeLedger(nil)
- ledger.SetTrackedCreatable(0, basics.CreatableLocator{Index: 100})
- ep := defaultEvalParams(&sb, &txn)
- ep.Ledger = ledger
- ep.TxnGroup = txgroup
- ep.GroupIndex = 1
- pass, err := EvalStateful(ops.Program, ep)
+ ep := defaultEvalParams(nil)
+ ep.TxnGroup = transactions.WrapSignedTxnsWithAD(txgroup)
+ ep.Ledger = MakeLedger(nil)
+
+ // should fail when no creatable was created
+ _, err := EvalApp(check0.Program, 1, 0, ep)
+ require.Error(t, err)
+ require.Contains(t, err.Error(), "the txn did not create anything")
+
+ ep.TxnGroup[0].ApplyData.ConfigAsset = 100
+ pass, err := EvalApp(check0.Program, 1, 0, ep)
if !pass || err != nil {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
}
require.NoError(t, err)
require.True(t, pass)
// should fail when accessing future transaction in group
- futureCreatableIDProg := `
-gaid 2
-int 0
->
-`
-
- ops = testProg(t, futureCreatableIDProg, 4)
- _, err = EvalStateful(ops.Program, ep)
+ check2 := testProg(t, "gaid 2; int 0; >", 4)
+ _, err = EvalApp(check2.Program, 1, 0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "gaid can't get creatable ID of txn ahead of the current one")
// should fail when accessing self
- ep.GroupIndex = 0
- ops = testProg(t, checkCreatableIDProg, 4)
- _, err = EvalStateful(ops.Program, ep)
+ _, err = EvalApp(check0.Program, 0, 0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "gaid is only for accessing creatable IDs of previous txns")
- ep.GroupIndex = 1
// should fail on non-creatable
ep.TxnGroup[0].Txn.Type = protocol.PaymentTx
- _, err = EvalStateful(ops.Program, ep)
+ _, err = EvalApp(check0.Program, 1, 0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "can't use gaid on txn that is not an app call nor an asset config txn")
ep.TxnGroup[0].Txn.Type = protocol.AssetConfigTx
-
- // should fail when no creatable was created
- ledger.SetTrackedCreatable(0, basics.CreatableLocator{})
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "the txn did not create anything")
}
func TestGtxn(t *testing.T) {
@@ -1873,8 +1782,8 @@ gtxn 0 Sender
txn.Txn.SelectionPK[:],
txn.Txn.Note,
}
- ep := defaultEvalParams(nil, &txn)
- ep.TxnGroup = makeSampleTxnGroup(txn)
+ ep := defaultEvalParams(&txn)
+ ep.TxnGroup = transactions.WrapSignedTxnsWithAD(makeSampleTxnGroup(txn))
testLogic(t, source, v, ep)
if v >= 3 {
gtxnsProg := strings.ReplaceAll(source, "gtxn 0", "int 0; gtxns")
@@ -1889,27 +1798,64 @@ gtxn 0 Sender
}
}
-func testLogic(t *testing.T, program string, v uint64, ep EvalParams, problems ...string) {
+func testLogic(t *testing.T, program string, v uint64, ep *EvalParams, problems ...string) {
+ t.Helper()
ops := testProg(t, program, v)
+ testLogicBytes(t, ops.Program, ep, problems...)
+}
+
+func testLogicBytes(t *testing.T, program []byte, ep *EvalParams, problems ...string) {
+ t.Helper()
+ testLogicFull(t, program, 0, ep, problems...)
+}
+
+func testLogicFull(t *testing.T, program []byte, gi int, ep *EvalParams, problems ...string) {
+ t.Helper()
+
+ var checkProblem string
+ var evalProblem string
+ switch len(problems) {
+ case 2:
+ checkProblem = problems[0]
+ evalProblem = problems[1]
+ case 1:
+ evalProblem = problems[0]
+ case 0:
+ default:
+ require.Failf(t, "Misused testLogic", "%d problems", len(problems))
+ }
+
sb := &strings.Builder{}
ep.Trace = sb
- ep.Txn.Lsig.Logic = ops.Program
- err := Check(ops.Program, ep)
- if err != nil {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.NoError(t, err)
- pass, err := Eval(ops.Program, ep)
- if len(problems) == 0 {
+ ep.TxnGroup[0].Lsig.Logic = program
+ err := CheckSignature(gi, ep)
+ if checkProblem == "" {
require.NoError(t, err, sb.String())
- require.True(t, pass, sb.String())
} else {
- require.Error(t, err, sb.String())
- for _, problem := range problems {
- require.Contains(t, err.Error(), problem)
- }
+ require.Error(t, err, "Check\n%s\nExpected: %v", sb, checkProblem)
+ require.Contains(t, err.Error(), checkProblem)
+ }
+
+ // We continue on to check Eval() of things that failed Check() because it's
+ // a nice confirmation that Check() is usually stricter than Eval(). This
+ // may mean that the problems argument is often duplicated, but this seems
+ // the best way to be concise about all sorts of tests.
+
+ pass, err := EvalSignature(gi, ep)
+ if evalProblem == "" {
+ require.NoError(t, err, "Eval%s\nExpected: PASS", sb)
+ assert.True(t, pass, "Eval%s\nExpected: PASS", sb)
+ return
+ }
+
+ // There is an evalProblem to check. REJECT is special and only means that
+ // the app didn't accept. Maybe it's an error, maybe it's just !pass.
+ if evalProblem == "REJECT" {
+ require.True(t, err != nil || !pass, "Eval%s\nExpected: REJECT", sb)
+ } else {
+ require.Error(t, err, "Eval%s\nExpected: %v", sb, evalProblem)
+ require.Contains(t, err.Error(), evalProblem)
}
}
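
Illustrative only (not part of the patch): the problems convention these helpers accept. Zero strings means Check and Eval must both succeed; one string is matched against the Eval error; two strings are matched against the Check error and the Eval error, in that order; and "REJECT" means Eval may either error or simply return false. exampleProblems is a hypothetical name.

func exampleProblems(t *testing.T) {
	// no problems: the program must check, evaluate, and pass
	testLogic(t, "int 1", LogicVersion, defaultEvalParams(nil))

	// "REJECT": the program evaluates cleanly but does not pass
	testLogic(t, "int 0", LogicVersion, defaultEvalParams(nil), "REJECT")
}
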
@@ -1925,43 +1871,30 @@ txna ApplicationArgs 0
var txn transactions.SignedTxn
txn.Txn.Accounts = make([]basics.Address, 1)
txn.Txn.Accounts[0] = txn.Txn.Sender
- txn.Txn.ApplicationArgs = make([][]byte, 1)
- txn.Txn.ApplicationArgs[0] = []byte(protocol.PaymentTx)
- txgroup := make([]transactions.SignedTxn, 1)
- txgroup[0] = txn
- ep := defaultEvalParams(nil, &txn)
- ep.TxnGroup = txgroup
- _, err := Eval(ops.Program, ep)
- require.NoError(t, err)
+ txn.Txn.ApplicationArgs = [][]byte{txn.Txn.Sender[:]}
+ ep := defaultEvalParams(&txn)
+ testLogicBytes(t, ops.Program, ep)
// modify txn field
saved := ops.Program[2]
ops.Program[2] = 0x01
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "txna unsupported field")
+ testLogicBytes(t, ops.Program, ep, "unsupported array field")
// modify txn field to unknown one
ops.Program[2] = 99
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid txn field 99")
+ testLogicBytes(t, ops.Program, ep, "invalid txn field 99")
// modify txn array index
ops.Program[2] = saved
saved = ops.Program[3]
ops.Program[3] = 0x02
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid Accounts index")
+ testLogicBytes(t, ops.Program, ep, "invalid Accounts index")
// modify txn array index in the second opcode
ops.Program[3] = saved
saved = ops.Program[6]
ops.Program[6] = 0x01
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid ApplicationArgs index")
+ testLogicBytes(t, ops.Program, ep, "invalid ApplicationArgs index")
ops.Program[6] = saved
// check special case: Account 0 == Sender
@@ -1973,48 +1906,36 @@ txn Sender
ops2 := testProg(t, source, AssemblerMaxVersion)
var txn2 transactions.SignedTxn
copy(txn2.Txn.Sender[:], []byte("aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00"))
- ep2 := defaultEvalParams(nil, &txn2)
- pass, err := Eval(ops2.Program, ep2)
- require.NoError(t, err)
- require.True(t, pass)
+ ep2 := defaultEvalParams(&txn2)
+ testLogicBytes(t, ops2.Program, ep2)
// check gtxna
source = `gtxna 0 Accounts 1
txna ApplicationArgs 0
==`
ops = testProg(t, source, AssemblerMaxVersion)
- require.NoError(t, err)
- _, err = Eval(ops.Program, ep)
- require.NoError(t, err)
+ testLogicBytes(t, ops.Program, ep)
// modify gtxn index
saved = ops.Program[2]
ops.Program[2] = 0x01
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "gtxna lookup TxnGroup[1] but it only has 1")
+ testLogicBytes(t, ops.Program, ep, "gtxna lookup TxnGroup[1] but it only has 1")
// modify gtxn field
ops.Program[2] = saved
saved = ops.Program[3]
ops.Program[3] = 0x01
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "gtxna unsupported field")
+ testLogicBytes(t, ops.Program, ep, "unsupported array field")
// modify gtxn field to unknown one
ops.Program[3] = 99
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid txn field 99")
+ testLogicBytes(t, ops.Program, ep, "invalid txn field 99")
// modify gtxn array index
ops.Program[3] = saved
saved = ops.Program[4]
ops.Program[4] = 0x02
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid Accounts index")
+ testLogicBytes(t, ops.Program, ep, "invalid Accounts index")
ops.Program[4] = saved
// check special case: Account 0 == Sender
@@ -2026,13 +1947,8 @@ txn Sender
ops3 := testProg(t, source, AssemblerMaxVersion)
var txn3 transactions.SignedTxn
copy(txn2.Txn.Sender[:], []byte("aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00"))
- txgroup3 := make([]transactions.SignedTxn, 1)
- txgroup3[0] = txn3
- ep3 := defaultEvalParams(nil, &txn3)
- ep3.TxnGroup = txgroup3
- pass, err = Eval(ops3.Program, ep3)
- require.NoError(t, err)
- require.True(t, pass)
+ ep3 := defaultEvalParams(&txn3)
+ testLogicBytes(t, ops3.Program, ep3)
}
// check empty values in ApplicationArgs and Account
@@ -2050,42 +1966,24 @@ int 0
var txn transactions.SignedTxn
txn.Txn.ApplicationArgs = make([][]byte, 1)
txn.Txn.ApplicationArgs[0] = []byte("")
- txgroup := make([]transactions.SignedTxn, 1)
- txgroup[0] = txn
- ep := defaultEvalParams(nil, &txn)
- ep.TxnGroup = txgroup
- pass, err := Eval(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn))
+
txn.Txn.ApplicationArgs[0] = nil
- txgroup[0] = txn
- ep.TxnGroup = txgroup
- pass, err = Eval(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn))
source2 := `txna Accounts 1
global ZeroAddress
==
`
- ops2 := testProg(t, source2, AssemblerMaxVersion)
+ ops = testProg(t, source2, AssemblerMaxVersion)
var txn2 transactions.SignedTxn
txn2.Txn.Accounts = make([]basics.Address, 1)
txn2.Txn.Accounts[0] = basics.Address{}
- txgroup2 := make([]transactions.SignedTxn, 1)
- txgroup2[0] = txn2
- ep2 := defaultEvalParams(nil, &txn2)
- ep2.TxnGroup = txgroup2
- pass, err = Eval(ops2.Program, ep2)
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn2))
+
txn2.Txn.Accounts = make([]basics.Address, 1)
- txgroup2[0] = txn
- ep2.TxnGroup = txgroup2
- pass, err = Eval(ops2.Program, ep2)
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn2))
}
func TestTxnas(t *testing.T) {
@@ -2103,14 +2001,9 @@ txnas ApplicationArgs
var txn transactions.SignedTxn
txn.Txn.Accounts = make([]basics.Address, 1)
txn.Txn.Accounts[0] = txn.Txn.Sender
- txn.Txn.ApplicationArgs = make([][]byte, 1)
- txn.Txn.ApplicationArgs[0] = []byte(protocol.PaymentTx)
- txgroup := make([]transactions.SignedTxn, 1)
- txgroup[0] = txn
- ep := defaultEvalParams(nil, &txn)
- ep.TxnGroup = txgroup
- _, err := Eval(ops.Program, ep)
- require.NoError(t, err)
+ txn.Txn.ApplicationArgs = [][]byte{txn.Txn.Sender[:]}
+ ep := defaultEvalParams(&txn)
+ testLogicBytes(t, ops.Program, ep)
// check special case: Account 0 == Sender
// even without any additional context
@@ -2119,13 +2012,10 @@ txnas Accounts
txn Sender
==
`
- ops2 := testProg(t, source, AssemblerMaxVersion)
+ ops = testProg(t, source, AssemblerMaxVersion)
var txn2 transactions.SignedTxn
copy(txn2.Txn.Sender[:], []byte("aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00"))
- ep2 := defaultEvalParams(nil, &txn2)
- pass, err := Eval(ops2.Program, ep2)
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn2))
// check gtxnas
source = `int 1
@@ -2133,9 +2023,7 @@ gtxnas 0 Accounts
txna ApplicationArgs 0
==`
ops = testProg(t, source, AssemblerMaxVersion)
- require.NoError(t, err)
- _, err = Eval(ops.Program, ep)
- require.NoError(t, err)
+ testLogicBytes(t, ops.Program, ep)
// check special case: Account 0 == Sender
// even without any additional context
@@ -2144,16 +2032,10 @@ gtxnas 0 Accounts
txn Sender
==
`
- ops3 := testProg(t, source, AssemblerMaxVersion)
+ ops = testProg(t, source, AssemblerMaxVersion)
var txn3 transactions.SignedTxn
- copy(txn2.Txn.Sender[:], []byte("aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00"))
- txgroup3 := make([]transactions.SignedTxn, 1)
- txgroup3[0] = txn3
- ep3 := defaultEvalParams(nil, &txn3)
- ep3.TxnGroup = txgroup3
- pass, err = Eval(ops3.Program, ep3)
- require.NoError(t, err)
- require.True(t, pass)
+ copy(txn3.Txn.Sender[:], []byte("aoeuiaoeuiaoeuiaoeuiaoeuiaoeui00"))
+ testLogicBytes(t, ops.Program, defaultEvalParams(&txn3))
// check gtxnsas
source = `int 0
@@ -2162,9 +2044,7 @@ gtxnsas Accounts
txna ApplicationArgs 0
==`
ops = testProg(t, source, AssemblerMaxVersion)
- require.NoError(t, err)
- _, err = Eval(ops.Program, ep)
- require.NoError(t, err)
+ testLogicBytes(t, ops.Program, ep)
}
func TestBitOps(t *testing.T) {
@@ -2244,17 +2124,17 @@ func TestSubstringFlop(t *testing.T) {
// fails in compiler
testProg(t, `byte 0xf000000000000000
substring
-len`, 2, expect{2, "substring expects 2 immediate arguments"})
+len`, 2, Expect{2, "substring expects 2 immediate arguments"})
// fails in compiler
testProg(t, `byte 0xf000000000000000
substring 1
-len`, 2, expect{2, "substring expects 2 immediate arguments"})
+len`, 2, Expect{2, "substring expects 2 immediate arguments"})
// fails in compiler
testProg(t, `byte 0xf000000000000000
substring 4 2
-len`, 2, expect{2, "substring end is before start"})
+len`, 2, Expect{2, "substring end is before start"})
// fails at runtime
testPanics(t, `byte 0xf000000000000000
@@ -2307,11 +2187,11 @@ func TestExtractFlop(t *testing.T) {
// fails in compiler
testProg(t, `byte 0xf000000000000000
extract
- len`, 5, expect{2, "extract expects 2 immediate arguments"})
+ len`, 5, Expect{2, "extract expects 2 immediate arguments"})
testProg(t, `byte 0xf000000000000000
extract 1
- len`, 5, expect{2, "extract expects 2 immediate arguments"})
+ len`, 5, Expect{2, "extract expects 2 immediate arguments"})
// fails at runtime
err := testPanics(t, `byte 0xf000000000000000
@@ -2432,6 +2312,7 @@ func TestGload(t *testing.T) {
// for simple app-call-only transaction groups
type scratchTestCase struct {
tealSources []string
+ errTxn int
errContains string
}
@@ -2480,6 +2361,7 @@ store 0
int 1
`,
},
+ errTxn: 0,
errContains: "can't use gload on self, use load instead",
}
@@ -2494,67 +2376,29 @@ int 2
store 0
int 1`,
},
+ errTxn: 0,
errContains: "gload can't get future scratch space from txn with index 1",
}
cases := []scratchTestCase{
simpleCase, multipleTxnCase, selfCase, laterTxnSlotCase,
}
- proto := defaultEvalProtoWithVersion(LogicVersion)
for i, testCase := range cases {
t.Run(fmt.Sprintf("i=%d", i), func(t *testing.T) {
sources := testCase.tealSources
- // Assemble ops
- opsList := make([]*OpStream, len(sources))
- for j, source := range sources {
- ops := testProg(t, source, AssemblerMaxVersion)
- opsList[j] = ops
- }
- // Initialize txgroup and cxgroup
+ // Initialize txgroup
txgroup := make([]transactions.SignedTxn, len(sources))
for j := range txgroup {
- txgroup[j] = transactions.SignedTxn{
- Txn: transactions.Transaction{
- Type: protocol.ApplicationCallTx,
- },
- }
+ txgroup[j].Txn.Type = protocol.ApplicationCallTx
}
- // Construct EvalParams
- pastSideEffects := MakePastSideEffects(len(sources))
- epList := make([]EvalParams, len(sources))
- for j := range sources {
- epList[j] = EvalParams{
- Proto: &proto,
- Txn: &txgroup[j],
- TxnGroup: txgroup,
- GroupIndex: uint64(j),
- PastSideEffects: pastSideEffects,
- }
- }
-
- // Evaluate app calls
- shouldErr := testCase.errContains != ""
- didPass := true
- for j, ops := range opsList {
- pass, err := EvalStateful(ops.Program, epList[j])
-
- // Confirm it errors or that the error message is the expected one
- if !shouldErr {
- require.NoError(t, err)
- } else if shouldErr && err != nil {
- require.Error(t, err)
- require.Contains(t, err.Error(), testCase.errContains)
- }
-
- if !pass {
- didPass = false
- }
+ if testCase.errContains != "" {
+ testApps(t, sources, txgroup, LogicVersion, MakeLedger(nil), Expect{testCase.errTxn, testCase.errContains})
+ } else {
+ testApps(t, sources, txgroup, LogicVersion, MakeLedger(nil))
}
-
- require.Equal(t, !shouldErr, didPass)
})
}
@@ -2588,42 +2432,30 @@ int 1`,
failCases := []failureCase{nonAppCall, logicSigCall}
for j, failCase := range failCases {
t.Run(fmt.Sprintf("j=%d", j), func(t *testing.T) {
- source := "gload 0 0"
- ops := testProg(t, source, AssemblerMaxVersion)
+ program := testProg(t, "gload 0 0", AssemblerMaxVersion).Program
- // Initialize txgroup and cxgroup
- txgroup := make([]transactions.SignedTxn, 2)
- txgroup[0] = failCase.firstTxn
- txgroup[1] = transactions.SignedTxn{}
-
- // Construct EvalParams
- pastSideEffects := MakePastSideEffects(2)
- epList := make([]EvalParams, 2)
- for j := range epList {
- epList[j] = EvalParams{
- Proto: &proto,
- Txn: &txgroup[j],
- TxnGroup: txgroup,
- GroupIndex: uint64(j),
- PastSideEffects: pastSideEffects,
- }
+ txgroup := []transactions.SignedTxnWithAD{
+ {SignedTxn: failCase.firstTxn},
+ {},
+ }
+
+ ep := &EvalParams{
+ Proto: makeTestProto(),
+ TxnGroup: txgroup,
+ pastScratch: make([]*scratchSpace, 2),
}
- // Evaluate app call
- var err error
switch failCase.runMode {
case runModeApplication:
- _, err = EvalStateful(ops.Program, epList[1])
+ testAppBytes(t, program, ep, failCase.errContains)
default:
- _, err = Eval(ops.Program, epList[1])
+ testLogicBytes(t, program, ep, failCase.errContains, failCase.errContains)
}
-
- require.Error(t, err)
- require.Contains(t, err.Error(), failCase.errContains)
})
}
}
+// TestGloads tests gloads and gloadss
func TestGloads(t *testing.T) {
partitiontest.PartitionTest(t)
@@ -2643,51 +2475,35 @@ int 0
gloads 0
byte "txn 1"
==
+assert
int 1
gloads 1
byte "txn 2"
==
-&&`
+assert
+int 0
+int 0
+gloadss
+byte "txn 1"
+==
+assert
+int 1
+int 1
+gloadss
+byte "txn 2"
+==
+assert
+int 1
+`
sources := []string{source1, source2, source3}
- proto := defaultEvalProtoWithVersion(LogicVersion)
- // Assemble ops
- opsList := make([]*OpStream, len(sources))
- for j, source := range sources {
- ops := testProg(t, source, AssemblerMaxVersion)
- opsList[j] = ops
- }
-
- // Initialize txgroup and cxgroup
txgroup := make([]transactions.SignedTxn, len(sources))
for j := range txgroup {
- txgroup[j] = transactions.SignedTxn{
- Txn: transactions.Transaction{
- Type: protocol.ApplicationCallTx,
- },
- }
+ txgroup[j].Txn.Type = protocol.ApplicationCallTx
}
- // Construct EvalParams
- pastSideEffects := MakePastSideEffects(len(sources))
- epList := make([]EvalParams, len(sources))
- for j := range sources {
- epList[j] = EvalParams{
- Proto: &proto,
- Txn: &txgroup[j],
- TxnGroup: txgroup,
- GroupIndex: uint64(j),
- PastSideEffects: pastSideEffects,
- }
- }
-
- // Evaluate app calls
- for j, ops := range opsList {
- pass, err := EvalStateful(ops.Program, epList[j])
- require.NoError(t, err)
- require.True(t, pass)
- }
+ testApps(t, sources, txgroup, LogicVersion, MakeLedger(nil))
}
const testCompareProgramText = `int 35
@@ -2775,35 +2591,25 @@ func TestSlowLogic(t *testing.T) {
// v1overspend fails (on v1)
ops := testProg(t, v1overspend, 1)
- err := Check(ops.Program, defaultEvalParamsWithVersion(nil, nil, 1))
- require.Error(t, err)
- require.Contains(t, err.Error(), "static cost")
- // v2overspend passes Check, even on v2 proto, because cost is "grandfathered"
+ // We should never Eval this after it fails Check(), but nice to see it also fails.
+ testLogicBytes(t, ops.Program, defaultEvalParamsWithVersion(nil, 1),
+ "static cost", "dynamic cost")
+ // v2overspend passes Check, even on v2 proto, because the old low cost is "grandfathered"
ops = testProg(t, v2overspend, 1)
- err = Check(ops.Program, defaultEvalParamsWithVersion(nil, nil, 2))
- require.NoError(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParamsWithVersion(nil, 2))
// even the shorter, v2overspend, fails when compiled as v2 code
ops = testProg(t, v2overspend, 2)
- err = Check(ops.Program, defaultEvalParamsWithVersion(nil, nil, 2))
- require.Error(t, err)
- require.Contains(t, err.Error(), "static cost")
+ testLogicBytes(t, ops.Program, defaultEvalParamsWithVersion(nil, 2),
+ "static cost", "dynamic cost")
// in v4 cost is still 134, but only matters in Eval, not Check, so both fail there
- ep4 := defaultEvalParamsWithVersion(nil, nil, 4)
+ ep4 := defaultEvalParamsWithVersion(nil, 4)
ops = testProg(t, v1overspend, 4)
- err = Check(ops.Program, ep4)
- require.NoError(t, err)
- _, err = Eval(ops.Program, ep4)
- require.Error(t, err)
- require.Contains(t, err.Error(), "dynamic cost")
+ testLogicBytes(t, ops.Program, ep4, "dynamic cost")
ops = testProg(t, v2overspend, 4)
- err = Check(ops.Program, ep4)
- require.NoError(t, err)
- _, err = Eval(ops.Program, ep4)
- require.Error(t, err)
- require.Contains(t, err.Error(), "dynamic cost")
+ testLogicBytes(t, ops.Program, ep4, "dynamic cost")
}
func isNotPanic(t *testing.T, err error) {
@@ -2823,16 +2629,7 @@ func TestStackUnderflow(t *testing.T) {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
ops := testProg(t, `int 1`, v)
ops.Program = append(ops.Program, 0x08) // +
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err)
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, nil))
- if pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "stack underflow")
})
}
}
@@ -2845,16 +2642,7 @@ func TestWrongStackTypeRuntime(t *testing.T) {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
ops := testProg(t, `int 1`, v)
ops.Program = append(ops.Program, 0x01, 0x15) // sha256, len
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err)
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, nil))
- if pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "sha256 arg 0 wanted")
})
}
}
@@ -2867,16 +2655,8 @@ func TestEqMismatch(t *testing.T) {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
ops := testProg(t, `byte 0x1234; int 1`, v)
ops.Program = append(ops.Program, 0x12) // ==
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err) // TODO: Check should know the type stack was wrong
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, nil))
- if pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "cannot compare")
+ // TODO: Check should know the type stack was wrong
})
}
}
@@ -2889,16 +2669,7 @@ func TestNeqMismatch(t *testing.T) {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
ops := testProg(t, `byte 0x1234; int 1`, v)
ops.Program = append(ops.Program, 0x13) // !=
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err) // TODO: Check should know the type stack was wrong
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, nil))
- if pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "cannot compare")
})
}
}
@@ -2911,16 +2682,7 @@ func TestWrongStackTypeRuntime2(t *testing.T) {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
ops := testProg(t, `byte 0x1234; int 1`, v)
ops.Program = append(ops.Program, 0x08) // +
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err)
- sb := strings.Builder{}
- pass, _ := Eval(ops.Program, defaultEvalParams(&sb, nil))
- if pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "+ arg 0 wanted")
})
}
}
@@ -2938,16 +2700,7 @@ func TestIllegalOp(t *testing.T) {
break
}
}
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, nil))
- if pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "illegal opcode", "illegal opcode")
})
}
}
@@ -2965,16 +2718,8 @@ int 1
`, v)
// cut two last bytes - intc_1 and last byte of bnz
ops.Program = ops.Program[:len(ops.Program)-2]
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, nil))
- if pass {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
- }
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil),
+ "bnz program ends short", "bnz program ends short")
})
}
}
@@ -2988,13 +2733,9 @@ intc 0
intc 0
bnz done
done:`, 2)
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(t, err)
- sb := strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParams(&sb, nil))
- require.NoError(t, err)
- require.True(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil))
}
+
func TestShortBytecblock(t *testing.T) {
partitiontest.PartitionTest(t)
@@ -3007,17 +2748,8 @@ func TestShortBytecblock(t *testing.T) {
for i := 2; i < len(fullops.Program); i++ {
program := fullops.Program[:i]
t.Run(hex.EncodeToString(program), func(t *testing.T) {
- err := Check(program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- isNotPanic(t, err)
- sb := strings.Builder{}
- pass, err := Eval(program, defaultEvalParams(&sb, nil))
- if pass {
- t.Log(hex.EncodeToString(program))
- t.Log(sb.String())
- }
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, program, defaultEvalParams(nil),
+ "bytecblock", "bytecblock")
})
}
})
@@ -3038,17 +2770,7 @@ func TestShortBytecblock2(t *testing.T) {
t.Run(src, func(t *testing.T) {
program, err := hex.DecodeString(src)
require.NoError(t, err)
- err = Check(program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- isNotPanic(t, err)
- sb := strings.Builder{}
- pass, err := Eval(program, defaultEvalParams(&sb, nil))
- if pass {
- t.Log(hex.EncodeToString(program))
- t.Log(sb.String())
- }
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, program, defaultEvalParams(nil), "bytecblock", "bytecblock")
})
}
}
@@ -3068,8 +2790,7 @@ func TestPanic(t *testing.T) {
log := logging.TestingLog(t)
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
- ops, err := AssembleStringWithVersion(`int 1`, v)
- require.NoError(t, err)
+ ops := testProg(t, `int 1`, v)
var hackedOpcode int
var oldSpec OpSpec
for opcode, spec := range opsByOpcode[v] {
@@ -3083,10 +2804,10 @@ func TestPanic(t *testing.T) {
break
}
}
- sb := strings.Builder{}
- params := defaultEvalParams(&sb, nil)
- params.Logger = log
- err = Check(ops.Program, params)
+ params := defaultEvalParams(nil)
+ params.logger = log
+ params.TxnGroup[0].Lsig.Logic = ops.Program
+ err := CheckSignature(0, params)
require.Error(t, err)
if pe, ok := err.(PanicError); ok {
require.Equal(t, panicString, pe.PanicValue)
@@ -3095,15 +2816,14 @@ func TestPanic(t *testing.T) {
} else {
t.Errorf("expected PanicError object but got %T %#v", err, err)
}
- sb = strings.Builder{}
var txn transactions.SignedTxn
txn.Lsig.Logic = ops.Program
- params = defaultEvalParams(&sb, &txn)
- params.Logger = log
- pass, err := Eval(ops.Program, params)
+ params = defaultEvalParams(&txn)
+ params.logger = log
+ pass, err := EvalSignature(0, params)
if pass {
t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(params.Trace.String())
}
require.False(t, pass)
if pe, ok := err.(PanicError); ok {
@@ -3124,13 +2844,8 @@ func TestProgramTooNew(t *testing.T) {
t.Parallel()
var program [12]byte
vlen := binary.PutUvarint(program[:], EvalMaxVersion+1)
- err := Check(program[:vlen], defaultEvalParams(nil, nil))
- require.Error(t, err)
- isNotPanic(t, err)
- pass, err := Eval(program[:vlen], defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, program[:vlen], defaultEvalParams(nil),
+ "greater than max supported", "greater than max supported")
}
func TestInvalidVersion(t *testing.T) {
@@ -3139,13 +2854,7 @@ func TestInvalidVersion(t *testing.T) {
t.Parallel()
program, err := hex.DecodeString("ffffffffffffffffffffffff")
require.NoError(t, err)
- err = Check(program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- isNotPanic(t, err)
- pass, err := Eval(program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, program, defaultEvalParams(nil), "invalid version", "invalid version")
}
func TestProgramProtoForbidden(t *testing.T) {
@@ -3154,18 +2863,11 @@ func TestProgramProtoForbidden(t *testing.T) {
t.Parallel()
var program [12]byte
vlen := binary.PutUvarint(program[:], EvalMaxVersion)
- proto := config.ConsensusParams{
+ ep := defaultEvalParams(nil)
+ ep.Proto = &config.ConsensusParams{
LogicSigVersion: EvalMaxVersion - 1,
}
- ep := EvalParams{}
- ep.Proto = &proto
- err := Check(program[:vlen], ep)
- require.Error(t, err)
- ep.Txn = &transactions.SignedTxn{}
- pass, err := Eval(program[:vlen], ep)
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, program[:vlen], ep, "greater than protocol", "greater than protocol")
}
func TestMisalignedBranch(t *testing.T) {
@@ -3174,41 +2876,29 @@ func TestMisalignedBranch(t *testing.T) {
t.Parallel()
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
- ops, err := AssembleStringWithVersion(`int 1
+ ops := testProg(t, `int 1
bnz done
bytecblock 0x01234576 0xababcdcd 0xf000baad
done:
int 1`, v)
- require.NoError(t, err)
//t.Log(hex.EncodeToString(program))
canonicalProgramString := mutateProgVersion(v, "01200101224000112603040123457604ababcdcd04f000baad22")
canonicalProgramBytes, err := hex.DecodeString(canonicalProgramString)
require.NoError(t, err)
require.Equal(t, ops.Program, canonicalProgramBytes)
ops.Program[7] = 3 // clobber the branch offset to be in the middle of the bytecblock
- err = Check(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.Contains(t, err.Error(), "aligned")
- pass, err := Eval(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
+ // Since Eval() doesn't know the jump is bad, we reject "by luck"
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "aligned", "REJECT")
// back branches are checked differently, so test misaligned back branch
ops.Program[6] = 0xff // Clobber the two bytes of offset with 0xff 0xff = -1
ops.Program[7] = 0xff // That jumps into the offset itself (pc + 3 -1)
- err = Check(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
if v < backBranchEnabledVersion {
- require.Contains(t, err.Error(), "negative branch")
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "negative branch", "negative branch")
} else {
- require.Contains(t, err.Error(), "back branch")
- require.Contains(t, err.Error(), "aligned")
+ // Again, Eval() would not notice the bad back branch, but we happen to reject "by luck"
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "back branch target", "REJECT")
}
- pass, err = Eval(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
})
}
}
@@ -3219,25 +2909,19 @@ func TestBranchTooFar(t *testing.T) {
t.Parallel()
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
- ops, err := AssembleStringWithVersion(`int 1
+ ops := testProg(t, `int 1
bnz done
bytecblock 0x01234576 0xababcdcd 0xf000baad
done:
int 1`, v)
- require.NoError(t, err)
//t.Log(hex.EncodeToString(ops.Program))
canonicalProgramString := mutateProgVersion(v, "01200101224000112603040123457604ababcdcd04f000baad22")
canonicalProgramBytes, err := hex.DecodeString(canonicalProgramString)
require.NoError(t, err)
require.Equal(t, ops.Program, canonicalProgramBytes)
ops.Program[7] = 200 // clobber the branch offset to be beyond the end of the program
- err = Check(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.True(t, strings.Contains(err.Error(), "beyond end of program"))
- pass, err := Eval(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil),
+ "beyond end of program", "beyond end of program")
})
}
}
@@ -3248,12 +2932,11 @@ func TestBranchTooLarge(t *testing.T) {
t.Parallel()
for v := uint64(1); v <= AssemblerMaxVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
- ops, err := AssembleStringWithVersion(`int 1
+ ops := testProg(t, `int 1
bnz done
bytecblock 0x01234576 0xababcdcd 0xf000baad
done:
int 1`, v)
- require.NoError(t, err)
//t.Log(hex.EncodeToString(ops.Program))
// (br)anch byte, (hi)gh byte of offset, (lo)w byte: brhilo
canonicalProgramString := mutateProgVersion(v, "01200101224000112603040123457604ababcdcd04f000baad22")
@@ -3261,14 +2944,7 @@ int 1`, v)
require.NoError(t, err)
require.Equal(t, ops.Program, canonicalProgramBytes)
ops.Program[6] = 0x70 // clobber hi byte of branch offset
- err = Check(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.Contains(t, err.Error(), "beyond")
- pass, err := Eval(ops.Program, defaultEvalParams(nil, nil))
- require.Error(t, err)
- require.Contains(t, err.Error(), "beyond")
- require.False(t, pass)
- isNotPanic(t, err)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil), "beyond", "beyond")
})
}
branches := []string{
@@ -3281,7 +2957,6 @@ intc_0
done:
intc_1
`
- ep := defaultEvalParams(nil, nil)
for _, line := range branches {
t.Run(fmt.Sprintf("branch=%s", line), func(t *testing.T) {
source := fmt.Sprintf(template, line)
@@ -3289,13 +2964,8 @@ intc_1
require.NoError(t, err)
ops.Program[7] = 0xf0 // clobber the branch offset - highly negative
ops.Program[8] = 0xff // clobber the branch offset
- err = Check(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "beyond")
- pass, err := Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "beyond")
- require.False(t, pass)
+ testLogicBytes(t, ops.Program, defaultEvalParams(nil),
+ "branch target beyond", "branch target beyond")
})
}
}
@@ -3579,12 +3249,15 @@ int 142791994204213819
func evalLoop(b *testing.B, runs int, program []byte) {
b.ResetTimer()
for i := 0; i < runs; i++ {
- pass, err := Eval(program, benchmarkEvalParams(nil, nil))
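+ // EvalSignature evaluates the Lsig program of the indexed txn, so wrap the program in a SignedTxn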
+ var txn transactions.SignedTxn
+ txn.Lsig.Logic = program
+ pass, err := EvalSignature(0, benchmarkEvalParams(&txn))
if !pass {
// rerun to trace it. tracing messes up timing too much
- sb := strings.Builder{}
- pass, err = Eval(program, benchmarkEvalParams(&sb, nil))
- b.Log(sb.String())
+ ep := benchmarkEvalParams(&txn)
+ ep.Trace = &strings.Builder{}
+ pass, err = EvalSignature(0, ep)
+ b.Log(ep.Trace.String())
}
// require is super slow but makes useful error messages, wrap it in a check that makes the benchmark run a bunch faster
if err != nil {
@@ -3598,8 +3271,6 @@ func evalLoop(b *testing.B, runs int, program []byte) {
func benchmarkBasicProgram(b *testing.B, source string) {
ops := testProg(b, source, AssemblerMaxVersion)
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(b, err)
evalLoop(b, b.N, ops.Program)
}
@@ -3615,8 +3286,6 @@ func benchmarkOperation(b *testing.B, prefix string, operation string, suffix st
source := prefix + ";" + strings.Repeat(operation+";", 2000) + ";" + suffix
source = strings.ReplaceAll(source, ";", "\n")
ops := testProg(b, source, AssemblerMaxVersion)
- err := Check(ops.Program, defaultEvalParams(nil, nil))
- require.NoError(b, err)
evalLoop(b, runs, ops.Program)
b.ReportMetric(float64(inst)*15.0, "waste/op")
}
@@ -3690,6 +3359,7 @@ func BenchmarkBigMath(b *testing.B) {
{"b*", "", "byte 0x01234576; byte 0x0223627389; b*; pop", "int 1"},
{"b/", "", "byte 0x0123457673624736; byte 0x0223627389; b/; pop", "int 1"},
{"b%", "", "byte 0x0123457673624736; byte 0x0223627389; b/; pop", "int 1"},
+ {"bsqrt", "", "byte 0x0123457673624736; bsqrt; pop", "int 1"},
{"b+big", // u256 + u256
"byte 0x0123457601234576012345760123457601234576012345760123457601234576",
@@ -3711,6 +3381,10 @@ func BenchmarkBigMath(b *testing.B) {
`byte 0xa123457601234576012345760123457601234576012345760123457601234576
byte 0x34576012345760123457601234576312; b/; pop`,
"int 1"},
+ {"bsqrt-big", "",
+ `byte 0xa123457601234576012345760123457601234576012345760123457601234576
+ bsqrt; pop`,
+ "int 1"},
}
for _, bench := range benches {
b.Run(bench[0], func(b *testing.B) {
@@ -3792,16 +3466,17 @@ func BenchmarkCheckx5(b *testing.B) {
addBenchmark2Source,
}
- programs := make([]*OpStream, len(sourcePrograms))
- var err error
+ programs := make([][]byte, len(sourcePrograms))
for i, text := range sourcePrograms {
- programs[i], err = AssembleStringWithVersion(text, AssemblerMaxVersion)
- require.NoError(b, err)
+ ops := testProg(b, text, AssemblerMaxVersion)
+ programs[i] = ops.Program
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
for _, program := range programs {
- err = Check(program.Program, defaultEvalParams(nil, nil))
+ var txn transactions.SignedTxn
+ txn.Lsig.Logic = program
+ err := CheckSignature(0, defaultEvalParams(&txn))
if err != nil {
require.NoError(b, err)
}
@@ -3839,23 +3514,21 @@ pop
`
ops := testProg(t, text, AssemblerMaxVersion)
- ep := defaultEvalParams(nil, nil)
- ep.Txn = &transactions.SignedTxn{}
- ep.Txn.Txn.ApplicationArgs = [][]byte{[]byte("test")}
- _, err := Eval(ops.Program, ep)
- require.NoError(t, err)
+ var txn transactions.SignedTxn
+ txn.Lsig.Logic = ops.Program
+ txn.Txn.ApplicationArgs = [][]byte{[]byte("test")}
- ep = defaultEvalParamsV1(nil, nil)
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "greater than protocol supported version 1")
+ ep := defaultEvalParams(&txn)
+ testLogicBytes(t, ops.Program, ep)
+
+ ep = defaultEvalParamsWithVersion(&txn, 1)
+ testLogicBytes(t, ops.Program, ep,
+ "greater than protocol supported version 1", "greater than protocol supported version 1")
// hack the version and fail on illegal opcode
ops.Program[0] = 0x1
- ep = defaultEvalParamsV1(nil, nil)
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "illegal opcode 0x36") // txna
+ ep = defaultEvalParamsWithVersion(&txn, 1)
+ testLogicBytes(t, ops.Program, ep, "illegal opcode 0x36", "illegal opcode 0x36") // txna
}
func TestStackOverflow(t *testing.T) {
@@ -3956,35 +3629,19 @@ func TestApplicationsDisallowOldTeal(t *testing.T) {
partitiontest.PartitionTest(t)
const source = "int 1"
- ep := defaultEvalParams(nil, nil)
txn := makeSampleTxn()
txn.Txn.Type = protocol.ApplicationCallTx
txn.Txn.RekeyTo = basics.Address{}
- txngroup := []transactions.SignedTxn{txn}
- ep.TxnGroup = txngroup
+ ep := defaultEvalParams(&txn)
for v := uint64(0); v < appsEnabledVersion; v++ {
- ops, err := AssembleStringWithVersion(source, v)
- require.NoError(t, err)
-
- err = CheckStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), fmt.Sprintf("program version must be >= %d", appsEnabledVersion))
-
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), fmt.Sprintf("program version must be >= %d", appsEnabledVersion))
+ ops := testProg(t, source, v)
+ e := fmt.Sprintf("program version must be >= %d", appsEnabledVersion)
+ testAppBytes(t, ops.Program, ep, e, e)
}
- ops, err := AssembleStringWithVersion(source, appsEnabledVersion)
- require.NoError(t, err)
-
- err = CheckStateful(ops.Program, ep)
- require.NoError(t, err)
-
- _, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
+ testApp(t, source, ep)
}
func TestAnyRekeyToOrApplicationRaisesMinTealVersion(t *testing.T) {
@@ -4028,53 +3685,26 @@ func TestAnyRekeyToOrApplicationRaisesMinTealVersion(t *testing.T) {
for ci, cse := range cases {
t.Run(fmt.Sprintf("ci=%d", ci), func(t *testing.T) {
- ep := defaultEvalParams(nil, nil)
- ep.TxnGroup = cse.group
- ep.Txn = &cse.group[0]
+ ep := defaultEvalParams(nil)
+ ep.TxnGroup = transactions.WrapSignedTxnsWithAD(cse.group)
// Computed MinTealVersion should be == validFromVersion
- calc := ComputeMinTealVersion(cse.group)
+ calc := ComputeMinTealVersion(ep.TxnGroup, false)
require.Equal(t, calc, cse.validFromVersion)
// Should fail for all versions < validFromVersion
expected := fmt.Sprintf("program version must be >= %d", cse.validFromVersion)
for v := uint64(0); v < cse.validFromVersion; v++ {
- ops, err := AssembleStringWithVersion(source, v)
- require.NoError(t, err)
-
- err = CheckStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), expected)
-
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), expected)
-
- err = Check(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), expected)
-
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), expected)
+ ops := testProg(t, source, v)
+ testAppBytes(t, ops.Program, ep, expected, expected)
+ testLogicBytes(t, ops.Program, ep, expected, expected)
}
// Should succeed for all versions >= validFromVersion
for v := cse.validFromVersion; v <= AssemblerMaxVersion; v++ {
- ops, err := AssembleStringWithVersion(source, v)
- require.NoError(t, err)
-
- err = CheckStateful(ops.Program, ep)
- require.NoError(t, err)
-
- _, err = EvalStateful(ops.Program, ep)
- require.NoError(t, err)
-
- err = Check(ops.Program, ep)
- require.NoError(t, err)
-
- _, err = Eval(ops.Program, ep)
- require.NoError(t, err)
+ ops := testProg(t, source, v)
+ testAppBytes(t, ops.Program, ep)
+ testLogicBytes(t, ops.Program, ep)
}
})
}
@@ -4119,7 +3749,7 @@ func TestAllowedOpcodesV2(t *testing.T) {
"gtxn": true,
}
- ep := defaultEvalParams(nil, nil)
+ ep := defaultEvalParams(nil)
cnt := 0
for _, spec := range OpSpecs {
@@ -4128,10 +3758,10 @@ func TestAllowedOpcodesV2(t *testing.T) {
require.True(t, ok, "Missed opcode in the test: %s", spec.Name)
require.Contains(t, source, spec.Name)
ops := testProg(t, source, AssemblerMaxVersion)
- // all opcodes allowed in stateful mode so use CheckStateful/EvalStateful
- err := CheckStateful(ops.Program, ep)
+ // all opcodes are allowed in stateful mode, so use CheckContract/EvalApp
+ err := CheckContract(ops.Program, ep)
require.NoError(t, err, source)
- _, err = EvalStateful(ops.Program, ep)
+ _, err = EvalApp(ops.Program, 0, 0, ep)
if spec.Name != "return" {
// "return" opcode always succeeds so ignore it
require.Error(t, err, source)
@@ -4140,18 +3770,8 @@ func TestAllowedOpcodesV2(t *testing.T) {
for v := byte(0); v <= 1; v++ {
ops.Program[0] = v
- err = Check(ops.Program, ep)
- require.Error(t, err, source)
- require.Contains(t, err.Error(), "illegal opcode")
- err = CheckStateful(ops.Program, ep)
- require.Error(t, err, source)
- require.Contains(t, err.Error(), "illegal opcode")
- _, err = Eval(ops.Program, ep)
- require.Error(t, err, source)
- require.Contains(t, err.Error(), "illegal opcode")
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err, source)
- require.Contains(t, err.Error(), "illegal opcode")
+ testLogicBytes(t, ops.Program, ep, "illegal opcode", "illegal opcode")
+ testAppBytes(t, ops.Program, ep, "illegal opcode", "illegal opcode")
}
cnt++
}
@@ -4182,43 +3802,27 @@ func TestAllowedOpcodesV3(t *testing.T) {
"pushbytes": `pushbytes "stringsfail?"`,
}
- excluded := map[string]bool{}
-
- ep := defaultEvalParams(nil, nil)
+ ep := defaultEvalParams(nil)
cnt := 0
for _, spec := range OpSpecs {
- if spec.Version == 3 && !excluded[spec.Name] {
+ if spec.Version == 3 {
source, ok := tests[spec.Name]
require.True(t, ok, "Missed opcode in the test: %s", spec.Name)
require.Contains(t, source, spec.Name)
ops := testProg(t, source, AssemblerMaxVersion)
- // all opcodes allowed in stateful mode so use CheckStateful/EvalStateful
- err := CheckStateful(ops.Program, ep)
- require.NoError(t, err, source)
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err, source)
- require.NotContains(t, err.Error(), "illegal opcode")
+ // all opcodes are allowed in stateful mode, so exercise them with testAppBytes
+ testAppBytes(t, ops.Program, ep, "REJECT")
for v := byte(0); v <= 1; v++ {
ops.Program[0] = v
- err = Check(ops.Program, ep)
- require.Error(t, err, source)
- require.Contains(t, err.Error(), "illegal opcode")
- err = CheckStateful(ops.Program, ep)
- require.Error(t, err, source)
- require.Contains(t, err.Error(), "illegal opcode")
- _, err = Eval(ops.Program, ep)
- require.Error(t, err, source)
- require.Contains(t, err.Error(), "illegal opcode")
- _, err = EvalStateful(ops.Program, ep)
- require.Error(t, err, source)
- require.Contains(t, err.Error(), "illegal opcode")
+ testLogicBytes(t, ops.Program, ep, "illegal opcode", "illegal opcode")
+ testAppBytes(t, ops.Program, ep, "illegal opcode", "illegal opcode")
}
cnt++
}
}
- require.Equal(t, len(tests), cnt)
+ require.Len(t, tests, cnt)
}
func TestRekeyFailsOnOldVersion(t *testing.T) {
@@ -4227,23 +3831,12 @@ func TestRekeyFailsOnOldVersion(t *testing.T) {
t.Parallel()
for v := uint64(0); v < rekeyingEnabledVersion; v++ {
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
- ops, err := AssembleStringWithVersion(`int 1`, v)
- require.NoError(t, err)
+ ops := testProg(t, `int 1`, v)
var txn transactions.SignedTxn
- txn.Lsig.Logic = ops.Program
txn.Txn.RekeyTo = basics.Address{1, 2, 3, 4}
- sb := strings.Builder{}
- proto := defaultEvalProto()
- ep := defaultEvalParams(&sb, &txn)
- ep.TxnGroup = []transactions.SignedTxn{txn}
- ep.Proto = &proto
- err = Check(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), fmt.Sprintf("program version must be >= %d", rekeyingEnabledVersion))
- pass, err := Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), fmt.Sprintf("program version must be >= %d", rekeyingEnabledVersion))
- require.False(t, pass)
+ ep := defaultEvalParams(&txn)
+ e := fmt.Sprintf("program version must be >= %d", rekeyingEnabledVersion)
+ testLogicBytes(t, ops.Program, ep, e, e)
})
}
}
@@ -4268,7 +3861,7 @@ func testEvaluation(t *testing.T, program string, introduced uint64, tester eval
t.Run(fmt.Sprintf("v=%d", v), func(t *testing.T) {
t.Helper()
if v < introduced {
- testProg(t, obfuscate(program), v, expect{0, "...was introduced..."})
+ testProg(t, obfuscate(program), v, Expect{0, "...was introduced..."})
return
}
ops := testProg(t, program, v)
@@ -4277,21 +3870,20 @@ func testEvaluation(t *testing.T, program string, introduced uint64, tester eval
// EvalParams, so try all forward versions.
for lv := v; lv <= AssemblerMaxVersion; lv++ {
t.Run(fmt.Sprintf("lv=%d", lv), func(t *testing.T) {
- sb := strings.Builder{}
- err := Check(ops.Program, defaultEvalParamsWithVersion(&sb, nil, lv))
+ t.Helper()
+ var txn transactions.SignedTxn
+ txn.Lsig.Logic = ops.Program
+ ep := defaultEvalParamsWithVersion(&txn, lv)
+ err := CheckSignature(0, ep)
if err != nil {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
}
require.NoError(t, err)
- var txn transactions.SignedTxn
- txn.Lsig.Logic = ops.Program
- sb = strings.Builder{}
- pass, err := Eval(ops.Program, defaultEvalParamsWithVersion(&sb, &txn, lv))
+ ep = defaultEvalParamsWithVersion(&txn, lv)
+ pass, err := EvalSignature(0, ep)
ok := tester(pass, err)
if !ok {
- t.Log(hex.EncodeToString(ops.Program))
- t.Log(sb.String())
+ t.Log(ep.Trace.String())
t.Log(err)
}
require.True(t, ok)
@@ -4403,6 +3995,8 @@ func TestBytes(t *testing.T) {
// it fails to copy).
testAccepts(t, `byte "john"; dup; int 2; int 105; setbyte; pop; byte "john"; ==`, 3)
testAccepts(t, `byte "jo"; byte "hn"; concat; dup; int 2; int 105; setbyte; pop; byte "john"; ==`, 3)
+
+ testAccepts(t, `byte "john"; byte "john"; ==`, 1)
}
func TestMethod(t *testing.T) {
@@ -4727,6 +4321,16 @@ func TestBytesMath(t *testing.T) {
// Even 128 byte outputs are ok
testAccepts(t, fmt.Sprintf("byte 0x%s; byte 0x%s; b*; len; int 128; ==", effs, effs), 4)
+
+ testAccepts(t, "byte 0x00; bsqrt; byte 0x; ==; return", 6)
+ testAccepts(t, "byte 0x01; bsqrt; byte 0x01; ==; return", 6)
+ testAccepts(t, "byte 0x10; bsqrt; byte 0x04; ==; return", 6)
+ testAccepts(t, "byte 0x11; bsqrt; byte 0x04; ==; return", 6)
+ testAccepts(t, "byte 0xffffff; bsqrt; len; int 2; ==; return", 6)
+ // 64 byte long inputs are accepted, even if they produce longer outputs
+ testAccepts(t, fmt.Sprintf("byte 0x%s; bsqrt; len; int 32; ==", effs), 6)
+ // 65 byte inputs are not ok.
+ testPanics(t, fmt.Sprintf("byte 0x%s00; bsqrt; pop; int 1", effs), 6)
}
func TestBytesCompare(t *testing.T) {
@@ -4799,17 +4403,12 @@ func TestLog(t *testing.T) {
partitiontest.PartitionTest(t)
t.Parallel()
- proto := defaultEvalProtoWithVersion(LogicVersion)
- txn := transactions.SignedTxn{
- Txn: transactions.Transaction{
- Type: protocol.ApplicationCallTx,
- },
- }
- ledger := logictest.MakeLedger(nil)
+ var txn transactions.SignedTxn
+ txn.Txn.Type = protocol.ApplicationCallTx
+ ledger := MakeLedger(nil)
ledger.NewApp(txn.Txn.Receiver, 0, basics.AppParams{})
- sb := strings.Builder{}
- ep := defaultEvalParams(&sb, &txn)
- ep.Proto = &proto
+ ep := defaultEvalParams(&txn)
+ ep.Proto = makeTestProtoV(LogicVersion)
ep.Ledger = ledger
testCases := []struct {
source string
@@ -4837,21 +4436,14 @@ func TestLog(t *testing.T) {
},
}
- //track expected number of logs in cx.Logs
+ // track the expected number of logs in the returned EvalDelta.Logs
for i, s := range testCases {
- ops := testProg(t, s.source, AssemblerMaxVersion)
-
- err := CheckStateful(ops.Program, ep)
- require.NoError(t, err, s)
-
- pass, cx, err := EvalStatefulCx(ops.Program, ep)
- require.NoError(t, err)
- require.True(t, pass)
- require.Len(t, cx.Logs, s.loglen)
+ delta := testApp(t, s.source, ep)
+ require.Len(t, delta.Logs, s.loglen)
if i == len(testCases)-1 {
- require.Equal(t, strings.Repeat("a", MaxLogSize), cx.Logs[0])
+ require.Equal(t, strings.Repeat("a", MaxLogSize), delta.Logs[0])
} else {
- for _, l := range cx.Logs {
+ for _, l := range delta.Logs {
require.Equal(t, "a logging message", l)
}
}
@@ -4901,21 +4493,12 @@ func TestLog(t *testing.T) {
}
for _, c := range failCases {
- ops := testProg(t, c.source, AssemblerMaxVersion)
-
- err := CheckStateful(ops.Program, ep)
- require.NoError(t, err, c)
-
- var pass bool
switch c.runMode {
case runModeApplication:
- pass, err = EvalStateful(ops.Program, ep)
+ testApp(t, c.source, ep, c.errContains)
default:
- pass, err = Eval(ops.Program, ep)
-
+ testLogic(t, c.source, AssemblerMaxVersion, ep, c.errContains, c.errContains)
}
- require.Contains(t, err.Error(), c.errContains)
- require.False(t, pass)
}
}
@@ -4936,184 +4519,122 @@ func TestPcDetails(t *testing.T) {
for i, test := range tests {
t.Run(fmt.Sprintf("i=%d", i), func(t *testing.T) {
ops := testProg(t, test.source, LogicVersion)
- txn := makeSampleTxn()
- txgroup := makeSampleTxnGroup(txn)
- txn.Lsig.Logic = ops.Program
- sb := strings.Builder{}
- ep := defaultEvalParams(&sb, &txn)
- ep.TxnGroup = txgroup
-
- var cx EvalContext
- cx.EvalParams = ep
- cx.runModeFlags = runModeSignature
+ ep, _, _ := makeSampleEnv()
- pass, err := eval(ops.Program, &cx)
+ pass, cx, err := EvalContract(ops.Program, 0, 0, ep)
require.Error(t, err)
require.False(t, pass)
+ assert.Equal(t, test.pc, cx.pc, ep.Trace.String())
+
pc, det := cx.PcDetails()
- require.Equal(t, test.pc, pc)
- require.Equal(t, test.det, det)
+ assert.Equal(t, test.pc, pc)
+ assert.Equal(t, test.det, det)
})
}
}
var minB64DecodeVersion uint64 = 6
-type b64DecodeTestCase struct {
- Encoded string
- IsURL bool
- HasExtraNLs bool
- Decoded string
- Error error
-}
+func TestOpBase64Decode(t *testing.T) {
+ partitiontest.PartitionTest(t)
+ t.Parallel()
-var testCases = []b64DecodeTestCase{
- {"TU9CWS1ESUNLOwoKb3IsIFRIRSBXSEFMRS4KCgpCeSBIZXJtYW4gTWVsdmlsbGU=",
- false,
- false,
- `MOBY-DICK;
+ testCases := []struct {
+ encoded string
+ alph string
+ decoded string
+ error string
+ }{
+ {"TU9CWS1ESUNLOwoKb3IsIFRIRSBXSEFMRS4KCgpCeSBIZXJtYW4gTWVsdmlsbGU=",
+ "StdEncoding",
+ `MOBY-DICK;
or, THE WHALE.
-By Herman Melville`,
- nil,
- },
- {"TU9CWS1ESUNLOwoKb3IsIFRIRSBXSEFMRS4KCgpCeSBIZXJtYW4gTWVsdmlsbGU=",
- true,
- false,
- `MOBY-DICK;
+By Herman Melville`, "",
+ },
+ {"TU9CWS1ESUNLOwoKb3IsIFRIRSBXSEFMRS4KCgpCeSBIZXJtYW4gTWVsdmlsbGU=",
+ "URLEncoding",
+ `MOBY-DICK;
or, THE WHALE.
-By Herman Melville`,
- nil,
- },
- {"YWJjMTIzIT8kKiYoKSctPUB+", false, false, "abc123!?$*&()'-=@~", nil},
- {"YWJjMTIzIT8kKiYoKSctPUB-", true, false, "abc123!?$*&()'-=@~", nil},
- {"YWJjMTIzIT8kKiYoKSctPUB+", true, false, "", base64.CorruptInputError(23)},
- {"YWJjMTIzIT8kKiYoKSctPUB-", false, false, "", base64.CorruptInputError(23)},
-
- // try extra ='s and various whitespace:
- {"", false, false, "", nil},
- {"", true, false, "", nil},
- {"=", false, true, "", base64.CorruptInputError(0)},
- {"=", true, true, "", base64.CorruptInputError(0)},
- {" ", false, true, "", base64.CorruptInputError(0)},
- {" ", true, true, "", base64.CorruptInputError(0)},
- {"\t", false, true, "", base64.CorruptInputError(0)},
- {"\t", true, true, "", base64.CorruptInputError(0)},
- {"\r", false, true, "", nil},
- {"\r", true, true, "", nil},
- {"\n", false, true, "", nil},
- {"\n", true, true, "", nil},
-
- {"YWJjMTIzIT8kKiYoKSctPUB+\n", false, true, "abc123!?$*&()'-=@~", nil},
- {"YWJjMTIzIT8kKiYoKSctPUB-\n", true, true, "abc123!?$*&()'-=@~", nil},
- {"YWJjMTIzIT8kK\riYoKSctPUB+\n", false, true, "abc123!?$*&()'-=@~", nil},
- {"YWJjMTIzIT8kK\riYoKSctPUB-\n", true, true, "abc123!?$*&()'-=@~", nil},
- {"\n\rYWJjMTIzIT8\rkKiYoKSctPUB+\n", false, true, "abc123!?$*&()'-=@~", nil},
- {"\n\rYWJjMTIzIT8\rkKiYoKSctPUB-\n", true, true, "abc123!?$*&()'-=@~", nil},
-
- // padding and extra legal whitespace
- {"SQ==", false, false, "I", nil},
- {"SQ==", true, false, "I", nil},
- {"\rS\r\nQ=\n=\r\r\n", false, true, "I", nil},
- {"\rS\r\nQ=\n=\r\r\n", true, true, "I", nil},
-
- // Padding necessary? - Yes it is! And exactly the expected place and amount.
- {"SQ==", false, false, "I", nil},
- {"SQ==", true, false, "I", nil},
- {"S=Q=", false, false, "", base64.CorruptInputError(1)},
- {"S=Q=", true, false, "", base64.CorruptInputError(1)},
- {"=SQ=", false, false, "", base64.CorruptInputError(0)},
- {"=SQ=", true, false, "", base64.CorruptInputError(0)},
- {"SQ", false, false, "", base64.CorruptInputError(0)},
- {"SQ", true, false, "", base64.CorruptInputError(0)},
- {"SQ=", false, false, "", base64.CorruptInputError(3)},
- {"SQ=", true, false, "", base64.CorruptInputError(3)},
- {"SQ===", false, false, "", base64.CorruptInputError(4)},
- {"SQ===", true, false, "", base64.CorruptInputError(4)},
-}
-
-func TestBase64DecodeFunc(t *testing.T) {
- partitiontest.PartitionTest(t)
- t.Parallel()
-
- for _, testCase := range testCases {
- encoding := base64.StdEncoding
- if testCase.IsURL {
- encoding = base64.URLEncoding
- }
- // sanity check:
- if testCase.Error == nil && !testCase.HasExtraNLs {
- require.Equal(t, testCase.Encoded, encoding.EncodeToString([]byte(testCase.Decoded)))
- }
+By Herman Melville`, "",
+ },
- decoded, err := base64Decode([]byte(testCase.Encoded), encoding)
- require.Equal(t, testCase.Error, err, fmt.Sprintf("Error (%s): case decode [%s] -> [%s]", err, testCase.Encoded, testCase.Decoded))
- require.Equal(t, []byte(testCase.Decoded), decoded)
+ // Test that a string that doesn't need padding can't have it
+ {"cGFk", "StdEncoding", "pad", ""},
+ {"cGFk=", "StdEncoding", "pad", "input byte 4"},
+ {"cGFk==", "StdEncoding", "pad", "input byte 4"},
+ {"cGFk===", "StdEncoding", "pad", "input byte 4"},
+ // Ensures that even correct padding is illegal if not needed
+ {"cGFk====", "StdEncoding", "pad", "input byte 4"},
+
+ // Test that padding must be present to make len = 0 mod 4.
+ {"bm9wYWQ=", "StdEncoding", "nopad", ""},
+ {"bm9wYWQ", "StdEncoding", "nopad", "illegal"},
+ {"bm9wYWQ==", "StdEncoding", "nopad", "illegal"},
+
+ {"YWJjMTIzIT8kKiYoKSctPUB+", "StdEncoding", "abc123!?$*&()'-=@~", ""},
+ {"YWJjMTIzIT8kKiYoKSctPUB+", "StdEncoding", "abc123!?$*&()'-=@~", ""},
+ {"YWJjMTIzIT8kKiYoKSctPUB-", "URLEncoding", "abc123!?$*&()'-=@~", ""},
+ {"YWJjMTIzIT8kKiYoKSctPUB+", "URLEncoding", "", "input byte 23"},
+ {"YWJjMTIzIT8kKiYoKSctPUB-", "StdEncoding", "", "input byte 23"},
+
+ // try extra ='s and various whitespace:
+ {"", "StdEncoding", "", ""},
+ {"", "URLEncoding", "", ""},
+ {"=", "StdEncoding", "", "byte 0"},
+ {"=", "URLEncoding", "", "byte 0"},
+ {" ", "StdEncoding", "", "byte 0"},
+ {" ", "URLEncoding", "", "byte 0"},
+ {"\t", "StdEncoding", "", "byte 0"},
+ {"\t", "URLEncoding", "", "byte 0"},
+ {"\r", "StdEncoding", "", ""},
+ {"\r", "URLEncoding", "", ""},
+ {"\n", "StdEncoding", "", ""},
+ {"\n", "URLEncoding", "", ""},
+
+ {"YWJjMTIzIT8kKiYoKSctPUB+\n", "StdEncoding", "abc123!?$*&()'-=@~", ""},
+ {"YWJjMTIzIT8kKiYoKSctPUB-\n", "URLEncoding", "abc123!?$*&()'-=@~", ""},
+ {"YWJjMTIzIT8kK\riYoKSctPUB+\n", "StdEncoding", "abc123!?$*&()'-=@~", ""},
+ {"YWJjMTIzIT8kK\riYoKSctPUB-\n", "URLEncoding", "abc123!?$*&()'-=@~", ""},
+ {"\n\rYWJjMTIzIT8\rkKiYoKSctPUB+\n", "StdEncoding", "abc123!?$*&()'-=@~", ""},
+ {"\n\rYWJjMTIzIT8\rkKiYoKSctPUB-\n", "URLEncoding", "abc123!?$*&()'-=@~", ""},
+
+ // padding and extra legal whitespace
+ {"SQ==", "StdEncoding", "I", ""},
+ {"SQ==", "URLEncoding", "I", ""},
+ {"\rS\r\nQ=\n=\r\r\n", "StdEncoding", "I", ""},
+ {"\rS\r\nQ=\n=\r\r\n", "URLEncoding", "I", ""},
+
+ // Padding necessary? - Yes it is! And exactly the expected place and amount.
+ {"SQ==", "StdEncoding", "I", ""},
+ {"SQ==", "URLEncoding", "I", ""},
+ {"S=Q=", "StdEncoding", "", "byte 1"},
+ {"S=Q=", "URLEncoding", "", "byte 1"},
+ {"=SQ=", "StdEncoding", "", "byte 0"},
+ {"=SQ=", "URLEncoding", "", "byte 0"},
+ {"SQ", "StdEncoding", "", "byte 0"},
+ {"SQ", "URLEncoding", "", "byte 0"},
+ {"SQ=", "StdEncoding", "", "byte 3"},
+ {"SQ=", "URLEncoding", "", "byte 3"},
+ {"SQ===", "StdEncoding", "", "byte 4"},
+ {"SQ===", "URLEncoding", "", "byte 4"},
}
-}
-
-type b64DecodeTestArgs struct {
- Raw []byte
- Encoded []byte
- IsURL bool
- Program []byte
-}
-func b64TestDecodeAssembleWithArgs(t *testing.T) []b64DecodeTestArgs {
- sourceTmpl := `#pragma version %d
-arg 0
-arg 1
-base64_decode %s
-==`
- args := []b64DecodeTestArgs{}
- for _, testCase := range testCases {
- if testCase.Error == nil {
- field := "StdEncoding"
- if testCase.IsURL {
- field = "URLEncoding"
- }
- source := fmt.Sprintf(sourceTmpl, minB64DecodeVersion, field)
- ops, err := AssembleStringWithVersion(source, minB64DecodeVersion)
- require.NoError(t, err)
+ template := `byte 0x%s; byte 0x%s; base64_decode %s; ==`
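+ // The %s slots take the expected decoded bytes (hex), the encoded input (hex), and the encoding alphabet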
+ for _, tc := range testCases {
+ source := fmt.Sprintf(template, hex.EncodeToString([]byte(tc.decoded)), hex.EncodeToString([]byte(tc.encoded)), tc.alph)
- arg := b64DecodeTestArgs{
- Raw: []byte(testCase.Decoded),
- Encoded: []byte(testCase.Encoded),
- IsURL: testCase.IsURL,
- Program: ops.Program,
- }
- args = append(args, arg)
- }
- }
- return args
-}
-
-func b64TestDecodeEval(tb testing.TB, args []b64DecodeTestArgs) {
- for _, data := range args {
- var txn transactions.SignedTxn
- txn.Lsig.Logic = data.Program
- txn.Lsig.Args = [][]byte{data.Raw[:], data.Encoded[:]}
- ep := defaultEvalParams(&strings.Builder{}, &txn)
- pass, err := Eval(data.Program, ep)
- if err != nil {
- require.NoError(tb, err)
- }
- if !pass {
- fmt.Printf("FAILING WITH data = %#v", data)
- require.True(tb, pass)
+ if tc.error == "" {
+ testAccepts(t, source, minB64DecodeVersion)
+ } else {
+ err := testPanics(t, source, minB64DecodeVersion)
+ require.Contains(t, err.Error(), tc.error)
}
}
}
-
-func TestOpBase64Decode(t *testing.T) {
- partitiontest.PartitionTest(t)
- t.Parallel()
- args := b64TestDecodeAssembleWithArgs(t)
- b64TestDecodeEval(t, args)
-}
diff --git a/data/transactions/logic/export_test.go b/data/transactions/logic/export_test.go
new file mode 100644
index 000000000..d0ca904b2
--- /dev/null
+++ b/data/transactions/logic/export_test.go
@@ -0,0 +1,44 @@
+// Copyright (C) 2019-2022 Algorand, Inc.
+// This file is part of go-algorand
+//
+// go-algorand is free software: you can redistribute it and/or modify
+// it under the terms of the GNU Affero General Public License as
+// published by the Free Software Foundation, either version 3 of the
+// License, or (at your option) any later version.
+//
+// go-algorand is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU Affero General Public License for more details.
+//
+// You should have received a copy of the GNU Affero General Public License
+// along with go-algorand. If not, see <https://www.gnu.org/licenses/>.
+
+package logic
+
+// Export for testing only. See
+// https://medium.com/@robiplus/golang-trick-export-for-test-aa16cbd7b8cd for a
+// nice explanation.
+
+func NewExpect(l int, s string) Expect {
+ return Expect{l, s}
+}
+
+func (ep *EvalParams) Reset() {
+ ep.reset()
+}
+
+var MakeSampleEnv = makeSampleEnv
+var MakeSampleEnvWithVersion = makeSampleEnvWithVersion
+var MakeSampleTxn = makeSampleTxn
+var MakeSampleTxnGroup = makeSampleTxnGroup
+var MakeTestProto = makeTestProto
+var MakeTestProtoV = makeTestProtoV
+var Obfuscate = obfuscate
+var TestApp = testApp
+var TestAppBytes = testAppBytes
+var TestApps = testApps
+var TestProg = testProg
+
+const InnerAppsEnabledVersion = innerAppsEnabledVersion
+const CreatedResourcesVersion = createdResourcesVersion
diff --git a/data/transactions/logic/fields.go b/data/transactions/logic/fields.go
index 8d4d4a839..a1c944ffb 100644
--- a/data/transactions/logic/fields.go
+++ b/data/transactions/logic/fields.go
@@ -17,13 +17,11 @@
package logic
import (
- "fmt"
-
"github.com/algorand/go-algorand/data/transactions"
"github.com/algorand/go-algorand/protocol"
)
-//go:generate stringer -type=TxnField,GlobalField,AssetParamsField,AppParamsField,AssetHoldingField,OnCompletionConstType,EcdsaCurve,Base64Encoding -output=fields_string.go
+//go:generate stringer -type=TxnField,GlobalField,AssetParamsField,AppParamsField,AcctParamsField,AssetHoldingField,OnCompletionConstType,EcdsaCurve,Base64Encoding -output=fields_string.go
// TxnField is an enum type for `txn` and `gtxn`
type TxnField int
@@ -164,119 +162,135 @@ const (
invalidTxnField // fence for some setup that loops from Sender..invalidTxnField
)
+// FieldSpec unifies the various specs for presentation
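+// Type gives the field's stack type, Version the version in which the field
+// became available, and OpVersion the version in which its containing opcode was introduced.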
+type FieldSpec interface {
+ Type() StackType
+ OpVersion() uint64
+ Note() string
+ Version() uint64
+}
+
// TxnFieldNames are arguments to the 'txn' and 'txnById' opcodes
var TxnFieldNames []string
-// TxnFieldTypes is StackBytes or StackUint64 parallel to TxnFieldNames
-var TxnFieldTypes []StackType
-
var txnFieldSpecByField map[TxnField]txnFieldSpec
-var txnFieldSpecByName tfNameSpecMap
+
+// TxnFieldSpecByName gives access to the field specs by field name
+var TxnFieldSpecByName tfNameSpecMap
// simple interface used by doc generator for fields versioning
type tfNameSpecMap map[string]txnFieldSpec
-func (s tfNameSpecMap) getExtraFor(name string) (extra string) {
- if s[name].version > 1 {
- extra = fmt.Sprintf("LogicSigVersion >= %d.", s[name].version)
- }
- return
+func (s tfNameSpecMap) SpecByName(name string) FieldSpec {
+ fs := s[name]
+ return &fs
}
type txnFieldSpec struct {
field TxnField
ftype StackType
+ array bool // Is this an array field?
version uint64 // When this field become available to txn/gtxn. 0=always
itxVersion uint64 // When this field become available to itxn_field. 0=never
effects bool // Is this a field on the "effects"? That is, something in ApplyData
}
-var txnFieldSpecs = []txnFieldSpec{
- {Sender, StackBytes, 0, 5, false},
- {Fee, StackUint64, 0, 5, false},
- {FirstValid, StackUint64, 0, 0, false},
- {FirstValidTime, StackUint64, 0, 0, false},
- {LastValid, StackUint64, 0, 0, false},
- {Note, StackBytes, 0, 6, false},
- {Lease, StackBytes, 0, 0, false},
- {Receiver, StackBytes, 0, 5, false},
- {Amount, StackUint64, 0, 5, false},
- {CloseRemainderTo, StackBytes, 0, 5, false},
- {VotePK, StackBytes, 0, 6, false},
- {SelectionPK, StackBytes, 0, 6, false},
- {VoteFirst, StackUint64, 0, 6, false},
- {VoteLast, StackUint64, 0, 6, false},
- {VoteKeyDilution, StackUint64, 0, 6, false},
- {Type, StackBytes, 0, 5, false},
- {TypeEnum, StackUint64, 0, 5, false},
- {XferAsset, StackUint64, 0, 5, false},
- {AssetAmount, StackUint64, 0, 5, false},
- {AssetSender, StackBytes, 0, 5, false},
- {AssetReceiver, StackBytes, 0, 5, false},
- {AssetCloseTo, StackBytes, 0, 5, false},
- {GroupIndex, StackUint64, 0, 0, false},
- {TxID, StackBytes, 0, 0, false},
- {ApplicationID, StackUint64, 2, 0, false},
- {OnCompletion, StackUint64, 2, 0, false},
- {ApplicationArgs, StackBytes, 2, 0, false},
- {NumAppArgs, StackUint64, 2, 0, false},
- {Accounts, StackBytes, 2, 0, false},
- {NumAccounts, StackUint64, 2, 0, false},
- {ApprovalProgram, StackBytes, 2, 0, false},
- {ClearStateProgram, StackBytes, 2, 0, false},
- {RekeyTo, StackBytes, 2, 6, false},
- {ConfigAsset, StackUint64, 2, 5, false},
- {ConfigAssetTotal, StackUint64, 2, 5, false},
- {ConfigAssetDecimals, StackUint64, 2, 5, false},
- {ConfigAssetDefaultFrozen, StackUint64, 2, 5, false},
- {ConfigAssetUnitName, StackBytes, 2, 5, false},
- {ConfigAssetName, StackBytes, 2, 5, false},
- {ConfigAssetURL, StackBytes, 2, 5, false},
- {ConfigAssetMetadataHash, StackBytes, 2, 5, false},
- {ConfigAssetManager, StackBytes, 2, 5, false},
- {ConfigAssetReserve, StackBytes, 2, 5, false},
- {ConfigAssetFreeze, StackBytes, 2, 5, false},
- {ConfigAssetClawback, StackBytes, 2, 5, false},
- {FreezeAsset, StackUint64, 2, 5, false},
- {FreezeAssetAccount, StackBytes, 2, 5, false},
- {FreezeAssetFrozen, StackUint64, 2, 5, false},
- {Assets, StackUint64, 3, 0, false},
- {NumAssets, StackUint64, 3, 0, false},
- {Applications, StackUint64, 3, 0, false},
- {NumApplications, StackUint64, 3, 0, false},
- {GlobalNumUint, StackUint64, 3, 0, false},
- {GlobalNumByteSlice, StackUint64, 3, 0, false},
- {LocalNumUint, StackUint64, 3, 0, false},
- {LocalNumByteSlice, StackUint64, 3, 0, false},
- {ExtraProgramPages, StackUint64, 4, 0, false},
- {Nonparticipation, StackUint64, 5, 6, false},
-
- {Logs, StackBytes, 5, 5, true},
- {NumLogs, StackUint64, 5, 5, true},
- {CreatedAssetID, StackUint64, 5, 5, true},
- {CreatedApplicationID, StackUint64, 5, 5, true},
+func (fs *txnFieldSpec) Type() StackType {
+ return fs.ftype
}
-// TxnaFieldNames are arguments to the 'txna' opcode
-// It is a subset of txn transaction fields so initialized here in-place
-var TxnaFieldNames = []string{ApplicationArgs.String(), Accounts.String(), Assets.String(), Applications.String(), Logs.String()}
+func (fs *txnFieldSpec) OpVersion() uint64 {
+ return 0
+}
-// TxnaFieldTypes is StackBytes or StackUint64 parallel to TxnaFieldNames
-var TxnaFieldTypes = []StackType{
- txnaFieldSpecByField[ApplicationArgs].ftype,
- txnaFieldSpecByField[Accounts].ftype,
- txnaFieldSpecByField[Assets].ftype,
- txnaFieldSpecByField[Applications].ftype,
- txnaFieldSpecByField[Logs].ftype,
+func (fs *txnFieldSpec) Version() uint64 {
+ return fs.version
}
-var txnaFieldSpecByField = map[TxnField]txnFieldSpec{
- ApplicationArgs: {ApplicationArgs, StackBytes, 2, 0, false},
- Accounts: {Accounts, StackBytes, 2, 0, false},
- Assets: {Assets, StackUint64, 3, 0, false},
- Applications: {Applications, StackUint64, 3, 0, false},
+func (fs *txnFieldSpec) Note() string {
+ note := txnFieldDocs[fs.field.String()]
+ if fs.effects {
+ note = addExtra(note, "Application mode only")
+ }
+ return note
+}
- Logs: {Logs, StackBytes, 5, 5, true},
+var txnFieldSpecs = []txnFieldSpec{
+ {Sender, StackBytes, false, 0, 5, false},
+ {Fee, StackUint64, false, 0, 5, false},
+ {FirstValid, StackUint64, false, 0, 0, false},
+ {FirstValidTime, StackUint64, false, 0, 0, false},
+ {LastValid, StackUint64, false, 0, 0, false},
+ {Note, StackBytes, false, 0, 6, false},
+ {Lease, StackBytes, false, 0, 0, false},
+ {Receiver, StackBytes, false, 0, 5, false},
+ {Amount, StackUint64, false, 0, 5, false},
+ {CloseRemainderTo, StackBytes, false, 0, 5, false},
+ {VotePK, StackBytes, false, 0, 6, false},
+ {SelectionPK, StackBytes, false, 0, 6, false},
+ {VoteFirst, StackUint64, false, 0, 6, false},
+ {VoteLast, StackUint64, false, 0, 6, false},
+ {VoteKeyDilution, StackUint64, false, 0, 6, false},
+ {Type, StackBytes, false, 0, 5, false},
+ {TypeEnum, StackUint64, false, 0, 5, false},
+ {XferAsset, StackUint64, false, 0, 5, false},
+ {AssetAmount, StackUint64, false, 0, 5, false},
+ {AssetSender, StackBytes, false, 0, 5, false},
+ {AssetReceiver, StackBytes, false, 0, 5, false},
+ {AssetCloseTo, StackBytes, false, 0, 5, false},
+ {GroupIndex, StackUint64, false, 0, 0, false},
+ {TxID, StackBytes, false, 0, 0, false},
+ {ApplicationID, StackUint64, false, 2, 6, false},
+ {OnCompletion, StackUint64, false, 2, 6, false},
+ {ApplicationArgs, StackBytes, true, 2, 6, false},
+ {NumAppArgs, StackUint64, false, 2, 0, false},
+ {Accounts, StackBytes, true, 2, 6, false},
+ {NumAccounts, StackUint64, false, 2, 0, false},
+ {ApprovalProgram, StackBytes, false, 2, 6, false},
+ {ClearStateProgram, StackBytes, false, 2, 6, false},
+ {RekeyTo, StackBytes, false, 2, 6, false},
+ {ConfigAsset, StackUint64, false, 2, 5, false},
+ {ConfigAssetTotal, StackUint64, false, 2, 5, false},
+ {ConfigAssetDecimals, StackUint64, false, 2, 5, false},
+ {ConfigAssetDefaultFrozen, StackUint64, false, 2, 5, false},
+ {ConfigAssetUnitName, StackBytes, false, 2, 5, false},
+ {ConfigAssetName, StackBytes, false, 2, 5, false},
+ {ConfigAssetURL, StackBytes, false, 2, 5, false},
+ {ConfigAssetMetadataHash, StackBytes, false, 2, 5, false},
+ {ConfigAssetManager, StackBytes, false, 2, 5, false},
+ {ConfigAssetReserve, StackBytes, false, 2, 5, false},
+ {ConfigAssetFreeze, StackBytes, false, 2, 5, false},
+ {ConfigAssetClawback, StackBytes, false, 2, 5, false},
+ {FreezeAsset, StackUint64, false, 2, 5, false},
+ {FreezeAssetAccount, StackBytes, false, 2, 5, false},
+ {FreezeAssetFrozen, StackUint64, false, 2, 5, false},
+ {Assets, StackUint64, true, 3, 6, false},
+ {NumAssets, StackUint64, false, 3, 0, false},
+ {Applications, StackUint64, true, 3, 6, false},
+ {NumApplications, StackUint64, false, 3, 0, false},
+ {GlobalNumUint, StackUint64, false, 3, 6, false},
+ {GlobalNumByteSlice, StackUint64, false, 3, 6, false},
+ {LocalNumUint, StackUint64, false, 3, 6, false},
+ {LocalNumByteSlice, StackUint64, false, 3, 6, false},
+ {ExtraProgramPages, StackUint64, false, 4, 6, false},
+ {Nonparticipation, StackUint64, false, 5, 6, false},
+
+ {Logs, StackBytes, true, 5, 5, true},
+ {NumLogs, StackUint64, false, 5, 5, true},
+ {CreatedAssetID, StackUint64, false, 5, 5, true},
+ {CreatedApplicationID, StackUint64, false, 5, 5, true},
+}
+
+// TxnaFieldNames returns the arguments to the 'txna' opcode.
+// It need not be fast, as it's only used for doc generation.
+func TxnaFieldNames() []string {
+ var names []string
+ for _, fs := range txnFieldSpecs {
+ if fs.array {
+ names = append(names, fs.field.String())
+ }
+ }
+ return names
}
var innerTxnTypes = map[string]uint64{
@@ -285,6 +299,7 @@ var innerTxnTypes = map[string]uint64{
string(protocol.AssetTransferTx): 5,
string(protocol.AssetConfigTx): 5,
string(protocol.AssetFreezeTx): 5,
+ string(protocol.ApplicationCallTx): 6,
}
// TxnTypeNames is the values of Txn.Type in enum order
@@ -368,15 +383,23 @@ const (
// GroupID [32]byte
GroupID
+ // v6
+
+ // OpcodeBudget The remaining budget available for execution
+ OpcodeBudget
+
+ // CallerApplicationID The ID of the caller app, else 0
+ CallerApplicationID
+
+ // CallerApplicationAddress The Address of the caller app, else ZeroAddress
+ CallerApplicationAddress
+
invalidGlobalField
)
// GlobalFieldNames are arguments to the 'global' opcode
var GlobalFieldNames []string
-// GlobalFieldTypes is StackUint64 StackBytes in parallel with GlobalFieldNames
-var GlobalFieldTypes []StackType
-
type globalFieldSpec struct {
field GlobalField
ftype StackType
@@ -384,6 +407,26 @@ type globalFieldSpec struct {
version uint64
}
+func (fs *globalFieldSpec) Type() StackType {
+ return fs.ftype
+}
+
+func (fs *globalFieldSpec) OpVersion() uint64 {
+ return 0
+}
+
+func (fs *globalFieldSpec) Version() uint64 {
+ return fs.version
+}
+func (fs *globalFieldSpec) Note() string {
+ note := globalFieldDocs[fs.field.String()]
+ if fs.mode == runModeApplication {
+ note = addExtra(note, "Application mode only.")
+ }
+ // There are no Signature mode only globals
+ return note
+}
+
var globalFieldSpecs = []globalFieldSpec{
{MinTxnFee, StackUint64, modeAny, 0}, // version 0 is the same as TEAL v1 (initial TEAL release)
{MinBalance, StackUint64, modeAny, 0},
@@ -397,20 +440,21 @@ var globalFieldSpecs = []globalFieldSpec{
{CreatorAddress, StackBytes, runModeApplication, 3},
{CurrentApplicationAddress, StackBytes, runModeApplication, 5},
{GroupID, StackBytes, modeAny, 5},
+ {OpcodeBudget, StackUint64, modeAny, 6},
+ {CallerApplicationID, StackUint64, runModeApplication, 6},
+ {CallerApplicationAddress, StackBytes, runModeApplication, 6},
}
-// GlobalFieldSpecByField maps GlobalField to spec
var globalFieldSpecByField map[GlobalField]globalFieldSpec
-var globalFieldSpecByName gfNameSpecMap
-// simple interface used by doc generator for fields versioning
+// GlobalFieldSpecByName gives access to the field specs by field name
+var GlobalFieldSpecByName gfNameSpecMap
+
type gfNameSpecMap map[string]globalFieldSpec
-func (s gfNameSpecMap) getExtraFor(name string) (extra string) {
- if s[name].version > 1 {
- extra = fmt.Sprintf("LogicSigVersion >= %d.", s[name].version)
- }
- return
+func (s gfNameSpecMap) SpecByName(name string) FieldSpec {
+ fs := s[name]
+ return &fs
}
// EcdsaCurve is an enum for `ecdsa_` opcodes
@@ -430,22 +474,38 @@ type ecdsaCurveSpec struct {
version uint64
}
+func (fs *ecdsaCurveSpec) Type() StackType {
+ return StackNone // Will not show, since all are the same
+}
+
+func (fs *ecdsaCurveSpec) OpVersion() uint64 {
+ return 5
+}
+
+func (fs *ecdsaCurveSpec) Version() uint64 {
+ return fs.version
+}
+
+func (fs *ecdsaCurveSpec) Note() string {
+ note := EcdsaCurveDocs[fs.field.String()]
+ return note
+}
+
var ecdsaCurveSpecs = []ecdsaCurveSpec{
{Secp256k1, 5},
}
var ecdsaCurveSpecByField map[EcdsaCurve]ecdsaCurveSpec
-var ecdsaCurveSpecByName ecDsaCurveNameSpecMap
+
+// EcdsaCurveSpecByName gives access to the field specs by field name
+var EcdsaCurveSpecByName ecDsaCurveNameSpecMap
// simple interface used by doc generator for fields versioning
type ecDsaCurveNameSpecMap map[string]ecdsaCurveSpec
-func (s ecDsaCurveNameSpecMap) getExtraFor(name string) (extra string) {
- // Uses 5 here because ecdsa fields were introduced in 5
- if s[name].version > 5 {
- extra = fmt.Sprintf("LogicSigVersion >= %d.", s[name].version)
- }
- return
+func (s ecDsaCurveNameSpecMap) SpecByName(name string) FieldSpec {
+ fs := s[name]
+ return &fs
}
// Base64Encoding is an enum for the `base64decode` opcode
@@ -478,11 +538,24 @@ var base64EncodingSpecByName base64EncodingSpecMap
type base64EncodingSpecMap map[string]base64EncodingSpec
+func (fs *base64EncodingSpec) Type() StackType {
+ return fs.ftype
+}
+
+func (fs *base64EncodingSpec) OpVersion() uint64 {
+ return 6
+}
+
+func (fs *base64EncodingSpec) Version() uint64 {
+ return fs.version
+}
+
+func (fs *base64EncodingSpec) Note() string {
+ note := "" // no doc list?
+ return note
+}
func (s base64EncodingSpecMap) getExtraFor(name string) (extra string) {
// Uses 6 here because base64_decode fields were introduced in 6
- if s[name].version > 6 {
- extra = fmt.Sprintf("LogicSigVersion >= %d.", s[name].version)
- }
return
}
@@ -500,32 +573,44 @@ const (
// AssetHoldingFieldNames are arguments to the 'asset_holding_get' opcode
var AssetHoldingFieldNames []string
-// AssetHoldingFieldTypes is StackUint64 StackBytes in parallel with AssetHoldingFieldNames
-var AssetHoldingFieldTypes []StackType
-
type assetHoldingFieldSpec struct {
field AssetHoldingField
ftype StackType
version uint64
}
+func (fs *assetHoldingFieldSpec) Type() StackType {
+ return fs.ftype
+}
+
+func (fs *assetHoldingFieldSpec) OpVersion() uint64 {
+ return 2
+}
+
+func (fs *assetHoldingFieldSpec) Version() uint64 {
+ return fs.version
+}
+
+func (fs *assetHoldingFieldSpec) Note() string {
+ note := assetHoldingFieldDocs[fs.field.String()]
+ return note
+}
+
var assetHoldingFieldSpecs = []assetHoldingFieldSpec{
{AssetBalance, StackUint64, 2},
{AssetFrozen, StackUint64, 2},
}
var assetHoldingFieldSpecByField map[AssetHoldingField]assetHoldingFieldSpec
-var assetHoldingFieldSpecByName ahfNameSpecMap
-// simple interface used by doc generator for fields versioning
+// AssetHoldingFieldSpecByName gives access to the field specs by field name
+var AssetHoldingFieldSpecByName ahfNameSpecMap
+
type ahfNameSpecMap map[string]assetHoldingFieldSpec
-func (s ahfNameSpecMap) getExtraFor(name string) (extra string) {
- // Uses 2 here because asset fields were introduced in 2
- if s[name].version > 2 {
- extra = fmt.Sprintf("LogicSigVersion >= %d.", s[name].version)
- }
- return
+func (s ahfNameSpecMap) SpecByName(name string) FieldSpec {
+ fs := s[name]
+ return &fs
}
// AssetParamsField is an enum for `asset_params_get` opcode
@@ -564,15 +649,29 @@ const (
// AssetParamsFieldNames are arguments to the 'asset_params_get' opcode
var AssetParamsFieldNames []string
-// AssetParamsFieldTypes is StackUint64 StackBytes in parallel with AssetParamsFieldNames
-var AssetParamsFieldTypes []StackType
-
type assetParamsFieldSpec struct {
field AssetParamsField
ftype StackType
version uint64
}
+func (fs *assetParamsFieldSpec) Type() StackType {
+ return fs.ftype
+}
+
+func (fs *assetParamsFieldSpec) OpVersion() uint64 {
+ return 2
+}
+
+func (fs *assetParamsFieldSpec) Version() uint64 {
+ return fs.version
+}
+
+func (fs *assetParamsFieldSpec) Note() string {
+ note := assetParamsFieldDocs[fs.field.String()]
+ return note
+}
+
var assetParamsFieldSpecs = []assetParamsFieldSpec{
{AssetTotal, StackUint64, 2},
{AssetDecimals, StackUint64, 2},
@@ -589,17 +688,15 @@ var assetParamsFieldSpecs = []assetParamsFieldSpec{
}
var assetParamsFieldSpecByField map[AssetParamsField]assetParamsFieldSpec
-var assetParamsFieldSpecByName apfNameSpecMap
-// simple interface used by doc generator for fields versioning
+// AssetParamsFieldSpecByName gives access to the field specs by field name
+var AssetParamsFieldSpecByName apfNameSpecMap
+
type apfNameSpecMap map[string]assetParamsFieldSpec
-func (s apfNameSpecMap) getExtraFor(name string) (extra string) {
- // Uses 2 here because asset fields were introduced in 2
- if s[name].version > 2 {
- extra = fmt.Sprintf("LogicSigVersion >= %d.", s[name].version)
- }
- return
+func (s apfNameSpecMap) SpecByName(name string) FieldSpec {
+ fs := s[name]
+ return &fs
}
// AppParamsField is an enum for `app_params_get` opcode
@@ -633,15 +730,29 @@ const (
// AppParamsFieldNames are arguments to the 'app_params_get' opcode
var AppParamsFieldNames []string
-// AppParamsFieldTypes is StackUint64 StackBytes in parallel with AppParamsFieldNames
-var AppParamsFieldTypes []StackType
-
type appParamsFieldSpec struct {
field AppParamsField
ftype StackType
version uint64
}
+func (fs *appParamsFieldSpec) Type() StackType {
+ return fs.ftype
+}
+
+func (fs *appParamsFieldSpec) OpVersion() uint64 {
+ return 5
+}
+
+func (fs *appParamsFieldSpec) Version() uint64 {
+ return fs.version
+}
+
+func (fs *appParamsFieldSpec) Note() string {
+ note := appParamsFieldDocs[fs.field.String()]
+ return note
+}
+
var appParamsFieldSpecs = []appParamsFieldSpec{
{AppApprovalProgram, StackBytes, 5},
{AppClearStateProgram, StackBytes, 5},
@@ -655,17 +766,75 @@ var appParamsFieldSpecs = []appParamsFieldSpec{
}
var appParamsFieldSpecByField map[AppParamsField]appParamsFieldSpec
-var appParamsFieldSpecByName appNameSpecMap
+
+// AppParamsFieldSpecByName gives access to the field specs by field name
+var AppParamsFieldSpecByName appNameSpecMap
// simple interface used by doc generator for fields versioning
type appNameSpecMap map[string]appParamsFieldSpec
-func (s appNameSpecMap) getExtraFor(name string) (extra string) {
- // Uses 5 here because app fields were introduced in 5
- if s[name].version > 5 {
- extra = fmt.Sprintf("LogicSigVersion >= %d.", s[name].version)
- }
- return
+func (s appNameSpecMap) SpecByName(name string) FieldSpec {
+ fs := s[name]
+ return &fs
+}
+
+// AcctParamsField is an enum for `acct_params_get` opcode
+type AcctParamsField int
+
+const (
+ // AcctBalance is the balance, with pending rewards
+ AcctBalance AcctParamsField = iota
+ // AcctMinBalance is the algos needed for this account's apps and assets
+ AcctMinBalance
+ // AcctAuthAddr is the rekeyed address if any, else ZeroAddress
+ AcctAuthAddr
+
+ invalidAcctParamsField
+)
+
+// AcctParamsFieldNames are arguments to the 'acct_params_get' opcode
+var AcctParamsFieldNames []string
+
+type acctParamsFieldSpec struct {
+ field AcctParamsField
+ ftype StackType
+ version uint64
+}
+
+func (fs *acctParamsFieldSpec) Type() StackType {
+ return fs.ftype
+}
+
+func (fs *acctParamsFieldSpec) OpVersion() uint64 {
+ return 6
+}
+
+func (fs *acctParamsFieldSpec) Version() uint64 {
+ return fs.version
+}
+
+func (fs *acctParamsFieldSpec) Note() string {
+ note := acctParamsFieldDocs[fs.field.String()]
+ return note
+}
+
+var acctParamsFieldSpecs = []acctParamsFieldSpec{
+ {AcctBalance, StackUint64, 6},
+ {AcctMinBalance, StackUint64, 6},
+ {AcctAuthAddr, StackBytes, 6},
+}
+
+var acctParamsFieldSpecByField map[AcctParamsField]acctParamsFieldSpec
+
+// AcctParamsFieldSpecByName gives access to the field specs by field name
+var AcctParamsFieldSpecByName acctNameSpecMap
+
+// simple interface used by doc generator for fields versioning
+type acctNameSpecMap map[string]acctParamsFieldSpec
+
+func (s acctNameSpecMap) SpecByName(name string) FieldSpec {
+ fs := s[name]
+ return &fs
}
func init() {
@@ -673,50 +842,56 @@ func init() {
for fi := Sender; fi < invalidTxnField; fi++ {
TxnFieldNames[fi] = fi.String()
}
- TxnFieldTypes = make([]StackType, int(invalidTxnField))
txnFieldSpecByField = make(map[TxnField]txnFieldSpec, len(TxnFieldNames))
for i, s := range txnFieldSpecs {
if int(s.field) != i {
panic("txnFieldSpecs disjoint with TxnField enum")
}
- TxnFieldTypes[i] = s.ftype
txnFieldSpecByField[s.field] = s
}
- txnFieldSpecByName = make(tfNameSpecMap, len(TxnFieldNames))
+ TxnFieldSpecByName = make(map[string]txnFieldSpec, len(TxnFieldNames))
for i, tfn := range TxnFieldNames {
- txnFieldSpecByName[tfn] = txnFieldSpecByField[TxnField(i)]
+ TxnFieldSpecByName[tfn] = txnFieldSpecByField[TxnField(i)]
}
GlobalFieldNames = make([]string, int(invalidGlobalField))
for i := MinTxnFee; i < invalidGlobalField; i++ {
- GlobalFieldNames[int(i)] = i.String()
+ GlobalFieldNames[i] = i.String()
}
- GlobalFieldTypes = make([]StackType, len(GlobalFieldNames))
globalFieldSpecByField = make(map[GlobalField]globalFieldSpec, len(GlobalFieldNames))
for i, s := range globalFieldSpecs {
if int(s.field) != i {
panic("globalFieldSpecs disjoint with GlobalField enum")
}
- GlobalFieldTypes[i] = s.ftype
globalFieldSpecByField[s.field] = s
}
- globalFieldSpecByName = make(gfNameSpecMap, len(GlobalFieldNames))
+ GlobalFieldSpecByName = make(gfNameSpecMap, len(GlobalFieldNames))
for i, gfn := range GlobalFieldNames {
- globalFieldSpecByName[gfn] = globalFieldSpecByField[GlobalField(i)]
+ GlobalFieldSpecByName[gfn] = globalFieldSpecByField[GlobalField(i)]
}
EcdsaCurveNames = make([]string, int(invalidEcdsaCurve))
for i := Secp256k1; i < invalidEcdsaCurve; i++ {
- EcdsaCurveNames[int(i)] = i.String()
+ EcdsaCurveNames[i] = i.String()
}
ecdsaCurveSpecByField = make(map[EcdsaCurve]ecdsaCurveSpec, len(EcdsaCurveNames))
for _, s := range ecdsaCurveSpecs {
ecdsaCurveSpecByField[s.field] = s
}
- ecdsaCurveSpecByName = make(ecDsaCurveNameSpecMap, len(EcdsaCurveNames))
+ EcdsaCurveSpecByName = make(ecDsaCurveNameSpecMap, len(EcdsaCurveNames))
for i, ahfn := range EcdsaCurveNames {
- ecdsaCurveSpecByName[ahfn] = ecdsaCurveSpecByField[EcdsaCurve(i)]
+ EcdsaCurveSpecByName[ahfn] = ecdsaCurveSpecByField[EcdsaCurve(i)]
+ }
+
+ base64EncodingSpecByField = make(map[Base64Encoding]base64EncodingSpec, len(base64EncodingNames))
+ for _, s := range base64EncodingSpecs {
+ base64EncodingSpecByField[s.field] = s
+ }
+
+ base64EncodingSpecByName = make(base64EncodingSpecMap, len(base64EncodingNames))
+ for i, encoding := range base64EncodingNames {
+ base64EncodingSpecByName[encoding] = base64EncodingSpecByField[Base64Encoding(i)]
}
base64EncodingSpecByField = make(map[Base64Encoding]base64EncodingSpec, len(base64EncodingNames))
@@ -731,47 +906,54 @@ func init() {
AssetHoldingFieldNames = make([]string, int(invalidAssetHoldingField))
for i := AssetBalance; i < invalidAssetHoldingField; i++ {
- AssetHoldingFieldNames[int(i)] = i.String()
+ AssetHoldingFieldNames[i] = i.String()
}
- AssetHoldingFieldTypes = make([]StackType, len(AssetHoldingFieldNames))
assetHoldingFieldSpecByField = make(map[AssetHoldingField]assetHoldingFieldSpec, len(AssetHoldingFieldNames))
for _, s := range assetHoldingFieldSpecs {
- AssetHoldingFieldTypes[int(s.field)] = s.ftype
assetHoldingFieldSpecByField[s.field] = s
}
- assetHoldingFieldSpecByName = make(ahfNameSpecMap, len(AssetHoldingFieldNames))
+ AssetHoldingFieldSpecByName = make(ahfNameSpecMap, len(AssetHoldingFieldNames))
for i, ahfn := range AssetHoldingFieldNames {
- assetHoldingFieldSpecByName[ahfn] = assetHoldingFieldSpecByField[AssetHoldingField(i)]
+ AssetHoldingFieldSpecByName[ahfn] = assetHoldingFieldSpecByField[AssetHoldingField(i)]
}
AssetParamsFieldNames = make([]string, int(invalidAssetParamsField))
for i := AssetTotal; i < invalidAssetParamsField; i++ {
- AssetParamsFieldNames[int(i)] = i.String()
+ AssetParamsFieldNames[i] = i.String()
}
- AssetParamsFieldTypes = make([]StackType, len(AssetParamsFieldNames))
assetParamsFieldSpecByField = make(map[AssetParamsField]assetParamsFieldSpec, len(AssetParamsFieldNames))
for _, s := range assetParamsFieldSpecs {
- AssetParamsFieldTypes[int(s.field)] = s.ftype
assetParamsFieldSpecByField[s.field] = s
}
- assetParamsFieldSpecByName = make(apfNameSpecMap, len(AssetParamsFieldNames))
+ AssetParamsFieldSpecByName = make(apfNameSpecMap, len(AssetParamsFieldNames))
for i, apfn := range AssetParamsFieldNames {
- assetParamsFieldSpecByName[apfn] = assetParamsFieldSpecByField[AssetParamsField(i)]
+ AssetParamsFieldSpecByName[apfn] = assetParamsFieldSpecByField[AssetParamsField(i)]
}
AppParamsFieldNames = make([]string, int(invalidAppParamsField))
for i := AppApprovalProgram; i < invalidAppParamsField; i++ {
- AppParamsFieldNames[int(i)] = i.String()
+ AppParamsFieldNames[i] = i.String()
}
- AppParamsFieldTypes = make([]StackType, len(AppParamsFieldNames))
appParamsFieldSpecByField = make(map[AppParamsField]appParamsFieldSpec, len(AppParamsFieldNames))
for _, s := range appParamsFieldSpecs {
- AppParamsFieldTypes[int(s.field)] = s.ftype
appParamsFieldSpecByField[s.field] = s
}
- appParamsFieldSpecByName = make(appNameSpecMap, len(AppParamsFieldNames))
+ AppParamsFieldSpecByName = make(appNameSpecMap, len(AppParamsFieldNames))
for i, apfn := range AppParamsFieldNames {
- appParamsFieldSpecByName[apfn] = appParamsFieldSpecByField[AppParamsField(i)]
+ AppParamsFieldSpecByName[apfn] = appParamsFieldSpecByField[AppParamsField(i)]
+ }
+
+ AcctParamsFieldNames = make([]string, int(invalidAcctParamsField))
+ for i := AcctBalance; i < invalidAcctParamsField; i++ {
+ AcctParamsFieldNames[i] = i.String()
+ }
+ acctParamsFieldSpecByField = make(map[AcctParamsField]acctParamsFieldSpec, len(AcctParamsFieldNames))
+ for _, s := range acctParamsFieldSpecs {
+ acctParamsFieldSpecByField[s.field] = s
+ }
+ AcctParamsFieldSpecByName = make(acctNameSpecMap, len(AcctParamsFieldNames))
+ for i, apfn := range AcctParamsFieldNames {
+ AcctParamsFieldSpecByName[apfn] = acctParamsFieldSpecByField[AcctParamsField(i)]
}
txnTypeIndexes = make(map[string]uint64, len(TxnTypeNames))
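The init() rework above repeats one pattern per field group: a name slice indexed by the enum, a spec-by-field map, and a now-exported spec-by-name map for assembler lookups. A minimal, self-contained sketch of that pattern, using a hypothetical ExampleField enum rather than the real logic package types:

package main

import "fmt"

type ExampleField int

const (
	FieldA ExampleField = iota
	FieldB
	invalidExampleField // sentinel marking the end of the enum
)

type exampleFieldSpec struct {
	field   ExampleField
	version uint64
}

var exampleFieldSpecs = []exampleFieldSpec{
	{FieldA, 1},
	{FieldB, 2},
}

func (f ExampleField) String() string {
	return [...]string{"FieldA", "FieldB", "invalidExampleField"}[f]
}

var (
	exampleFieldNames       []string
	exampleFieldSpecByField map[ExampleField]exampleFieldSpec
	ExampleFieldSpecByName  map[string]exampleFieldSpec // exported for "assembler" use
)

func init() {
	exampleFieldNames = make([]string, int(invalidExampleField))
	for i := FieldA; i < invalidExampleField; i++ {
		exampleFieldNames[i] = i.String()
	}
	exampleFieldSpecByField = make(map[ExampleField]exampleFieldSpec, len(exampleFieldNames))
	for i, s := range exampleFieldSpecs {
		if int(s.field) != i {
			panic("exampleFieldSpecs disjoint with ExampleField enum")
		}
		exampleFieldSpecByField[s.field] = s
	}
	ExampleFieldSpecByName = make(map[string]exampleFieldSpec, len(exampleFieldNames))
	for i, name := range exampleFieldNames {
		ExampleFieldSpecByName[name] = exampleFieldSpecByField[ExampleField(i)]
	}
}

func main() {
	spec, ok := ExampleFieldSpecByName["FieldB"]
	fmt.Println(ok, spec.version) // true 2
}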
diff --git a/data/transactions/logic/fields_string.go b/data/transactions/logic/fields_string.go
index ffe5ed5b3..34515d78d 100644
--- a/data/transactions/logic/fields_string.go
+++ b/data/transactions/logic/fields_string.go
@@ -1,4 +1,4 @@
-// Code generated by "stringer -type=TxnField,GlobalField,AssetParamsField,AppParamsField,AssetHoldingField,OnCompletionConstType,EcdsaCurve,Base64Encoding -output=fields_string.go"; DO NOT EDIT.
+// Code generated by "stringer -type=TxnField,GlobalField,AssetParamsField,AppParamsField,AcctParamsField,AssetHoldingField,OnCompletionConstType,EcdsaCurve,Base64Encoding -output=fields_string.go"; DO NOT EDIT.
package logic
@@ -99,12 +99,15 @@ func _() {
_ = x[CreatorAddress-9]
_ = x[CurrentApplicationAddress-10]
_ = x[GroupID-11]
- _ = x[invalidGlobalField-12]
+ _ = x[OpcodeBudget-12]
+ _ = x[CallerApplicationID-13]
+ _ = x[CallerApplicationAddress-14]
+ _ = x[invalidGlobalField-15]
}
-const _GlobalField_name = "MinTxnFeeMinBalanceMaxTxnLifeZeroAddressGroupSizeLogicSigVersionRoundLatestTimestampCurrentApplicationIDCreatorAddressCurrentApplicationAddressGroupIDinvalidGlobalField"
+const _GlobalField_name = "MinTxnFeeMinBalanceMaxTxnLifeZeroAddressGroupSizeLogicSigVersionRoundLatestTimestampCurrentApplicationIDCreatorAddressCurrentApplicationAddressGroupIDOpcodeBudgetCallerApplicationIDCallerApplicationAddressinvalidGlobalField"
-var _GlobalField_index = [...]uint8{0, 9, 19, 29, 40, 49, 64, 69, 84, 104, 118, 143, 150, 168}
+var _GlobalField_index = [...]uint8{0, 9, 19, 29, 40, 49, 64, 69, 84, 104, 118, 143, 150, 162, 181, 205, 223}
func (i GlobalField) String() string {
if i >= GlobalField(len(_GlobalField_index)-1) {
@@ -171,6 +174,26 @@ func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
+ _ = x[AcctBalance-0]
+ _ = x[AcctMinBalance-1]
+ _ = x[AcctAuthAddr-2]
+ _ = x[invalidAcctParamsField-3]
+}
+
+const _AcctParamsField_name = "AcctBalanceAcctMinBalanceAcctAuthAddrinvalidAcctParamsField"
+
+var _AcctParamsField_index = [...]uint8{0, 11, 25, 37, 59}
+
+func (i AcctParamsField) String() string {
+ if i < 0 || i >= AcctParamsField(len(_AcctParamsField_index)-1) {
+ return "AcctParamsField(" + strconv.FormatInt(int64(i), 10) + ")"
+ }
+ return _AcctParamsField_name[_AcctParamsField_index[i]:_AcctParamsField_index[i+1]]
+}
+func _() {
+ // An "invalid array index" compiler error signifies that the constant values have changed.
+ // Re-run the stringer command to generate them again.
+ var x [1]struct{}
_ = x[AssetBalance-0]
_ = x[AssetFrozen-1]
_ = x[invalidAssetHoldingField-2]
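The regenerated code above uses stringer's usual layout: all names packed into one string, with an index array marking the boundaries, so String() is a bounds check plus a single slice expression. A small self-contained illustration of that layout with a hypothetical enum:

package main

import (
	"fmt"
	"strconv"
)

type Color int

const (
	Red Color = iota
	Green
	Blue
)

// "Red" is bytes 0..3, "Green" 3..8, "Blue" 8..12 of the packed name string.
const _Color_name = "RedGreenBlue"

var _Color_index = [...]uint8{0, 3, 8, 12}

func (i Color) String() string {
	if i < 0 || i >= Color(len(_Color_index)-1) {
		return "Color(" + strconv.FormatInt(int64(i), 10) + ")"
	}
	return _Color_name[_Color_index[i]:_Color_index[i+1]]
}

func main() {
	fmt.Println(Green)    // Green
	fmt.Println(Color(7)) // Color(7)
}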
diff --git a/data/transactions/logic/fields_test.go b/data/transactions/logic/fields_test.go
index 7d65593b6..d40ddc263 100644
--- a/data/transactions/logic/fields_test.go
+++ b/data/transactions/logic/fields_test.go
@@ -23,16 +23,10 @@ import (
"github.com/stretchr/testify/require"
"github.com/algorand/go-algorand/data/basics"
- "github.com/algorand/go-algorand/data/transactions/logictest"
+ "github.com/algorand/go-algorand/data/transactions"
"github.com/algorand/go-algorand/test/partitiontest"
)
-func TestArrayFields(t *testing.T) {
- partitiontest.PartitionTest(t)
- require.Equal(t, len(TxnaFieldNames), len(TxnaFieldTypes))
- require.Equal(t, len(txnaFieldSpecByField), len(TxnaFieldTypes))
-}
-
// ensure v2+ fields fail in the TEAL assembler and evaluator on a version before they were introduced
// ensure v2+ fields error in v1 program
func TestGlobalFieldsVersions(t *testing.T) {
@@ -47,7 +41,7 @@ func TestGlobalFieldsVersions(t *testing.T) {
}
require.Greater(t, len(fields), 1)
- ledger := logictest.MakeLedger(nil)
+ ledger := MakeLedger(nil)
for _, field := range fields {
text := fmt.Sprintf("global %s", field.field.String())
// check assembler fails if version before introduction
@@ -61,30 +55,26 @@ func TestGlobalFieldsVersions(t *testing.T) {
// check on a version before the field version
preLogicVersion := field.version - 1
- proto := defaultEvalProtoWithVersion(preLogicVersion)
+ proto := makeTestProtoV(preLogicVersion)
if preLogicVersion < appsEnabledVersion {
require.False(t, proto.Application)
}
- ep := defaultEvalParams(nil, nil)
- ep.Proto = &proto
+ ep := defaultEvalParams(nil)
+ ep.Proto = proto
ep.Ledger = ledger
// check failure with version check
- _, err := Eval(ops.Program, ep)
+ _, err := EvalApp(ops.Program, 0, 0, ep)
require.Error(t, err)
require.Contains(t, err.Error(), "greater than protocol supported version")
// check opcodes failures
ops.Program[0] = byte(preLogicVersion) // set version
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid global field")
+ testLogicBytes(t, ops.Program, ep, "invalid global field")
// check opcodes failures on 0 version
ops.Program[0] = 0 // set version to 0
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid global field")
+ testLogicBytes(t, ops.Program, ep, "invalid global field")
}
}
@@ -111,7 +101,7 @@ func TestTxnFieldVersions(t *testing.T) {
}
txnaVersion := uint64(appsEnabledVersion)
- ledger := logictest.MakeLedger(nil)
+ ledger := MakeLedger(nil)
txn := makeSampleTxn()
// We'll reject too early if we have a nonzero RekeyTo, because that
// field must be zero for every txn in the group if this is an old
@@ -125,7 +115,7 @@ func TestTxnFieldVersions(t *testing.T) {
text := fmt.Sprintf(command, field)
asmError := asmDefaultError
txnaMode := false
- if _, ok := txnaFieldSpecByField[fs.field]; ok {
+ if fs.array {
text = fmt.Sprintf(subs[command], field)
asmError = "...txna opcode was introduced in ..."
txnaMode = true
@@ -140,42 +130,75 @@ func TestTxnFieldVersions(t *testing.T) {
}
testLine(t, text, fs.version, "")
- ops, err := AssembleStringWithVersion(text, AssemblerMaxVersion)
- require.NoError(t, err)
+ ops := testProg(t, text, AssemblerMaxVersion)
preLogicVersion := fs.version - 1
- proto := defaultEvalProtoWithVersion(preLogicVersion)
+ proto := makeTestProtoV(preLogicVersion)
if preLogicVersion < appsEnabledVersion {
require.False(t, proto.Application)
}
- ep := defaultEvalParams(nil, nil)
- ep.Proto = &proto
+ ep := defaultEvalParams(nil)
+ ep.Proto = proto
ep.Ledger = ledger
- ep.TxnGroup = txgroup
+ ep.TxnGroup = transactions.WrapSignedTxnsWithAD(txgroup)
// check failure with version check
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
- require.Contains(t, err.Error(), "greater than protocol supported version")
+ testLogicBytes(t, ops.Program, ep,
+ "greater than protocol supported version", "greater than protocol supported version")
// check opcodes failures
ops.Program[0] = byte(preLogicVersion) // set version
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
+ checkErr := ""
+ evalErr := "invalid txn field"
if txnaMode && preLogicVersion < txnaVersion {
- require.Contains(t, err.Error(), "illegal opcode")
- } else {
- require.Contains(t, err.Error(), "invalid txn field")
+ checkErr = "illegal opcode"
+ evalErr = "illegal opcode"
}
+ testLogicBytes(t, ops.Program, ep, checkErr, evalErr)
// check opcodes failures on 0 version
ops.Program[0] = 0 // set version to 0
- _, err = Eval(ops.Program, ep)
- require.Error(t, err)
+ checkErr = ""
+ evalErr = "invalid txn field"
if txnaMode {
- require.Contains(t, err.Error(), "illegal opcode")
+ checkErr = "illegal opcode"
+ evalErr = "illegal opcode"
+ }
+ testLogicBytes(t, ops.Program, ep, checkErr, evalErr)
+ }
+ }
+}
+
+// TestTxnEffectsAvailable ensures that LogicSigs can never use "effects"
+// fields, and that apps can only use effects fields with `txn` starting at
+// txnEffectsVersion. (itxn could use them earlier.)
+func TestTxnEffectsAvailable(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ t.Parallel()
+ for _, fs := range txnFieldSpecByField {
+ if !fs.effects {
+ continue
+ }
+ source := fmt.Sprintf("txn %s", fs.field.String())
+ if fs.array {
+ source = fmt.Sprintf("txna %s 0", fs.field.String())
+ }
+ for v := fs.version; v <= AssemblerMaxVersion; v++ {
+ ops := testProg(t, source, v)
+ ep := defaultEvalParams(nil)
+ ep.TxnGroup[0].Lsig.Logic = ops.Program
+ _, err := EvalSignature(0, ep)
+ require.Error(t, err)
+ ep.Ledger = MakeLedger(nil)
+ _, err = EvalApp(ops.Program, 0, 0, ep)
+ if v < txnEffectsVersion {
+ require.Error(t, err)
} else {
- require.Contains(t, err.Error(), "invalid txn field")
+ if fs.array {
+ continue // Array (Logs) will be 0 length, so will fail anyway
+ }
+ require.NoError(t, err)
}
}
}
@@ -199,19 +222,13 @@ func TestAssetParamsFieldsVersions(t *testing.T) {
text := fmt.Sprintf("intcblock 0 1; intc_0; asset_params_get %s; pop; pop; intc_1", field.field.String())
// check assembler fails if version before introduction
for v := uint64(2); v <= AssemblerMaxVersion; v++ {
- ep, _ := makeSampleEnv()
+ ep, _, _ := makeSampleEnv()
ep.Proto.LogicSigVersion = v
if field.version > v {
- testProg(t, text, v, expect{3, "...available in version..."})
+ testProg(t, text, v, Expect{3, "...available in version..."})
ops := testProg(t, text, field.version) // assemble in the future
- scratch := ops.Program
- scratch[0] = byte(v) // but we'll tweak the version byte back to v
- err := CheckStateful(scratch, ep)
- require.NoError(t, err)
- pass, err := EvalStateful(scratch, ep) // so eval fails on future field
- require.False(t, pass)
- require.Error(t, err)
- require.Contains(t, err.Error(), "invalid asset_params_get field")
+ ops.Program[0] = byte(v)
+ testAppBytes(t, ops.Program, ep, "invalid asset_params_get field")
} else {
testProg(t, text, v)
testApp(t, text, ep)
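These tests all exercise the same gate: a field or opcode is only usable once the program version reaches the version that introduced it, and earlier versions must fail at assembly or at evaluation. A minimal sketch of that check in isolation, with hypothetical names (the real spec structs and test helpers differ):

package main

import "fmt"

type fieldSpec struct {
	name    string
	version uint64 // program version the field was introduced in
}

func checkField(fs fieldSpec, programVersion uint64) error {
	if programVersion < fs.version {
		return fmt.Errorf("field %s is available in version %d; program is version %d",
			fs.name, fs.version, programVersion)
	}
	return nil
}

func main() {
	opcodeBudget := fieldSpec{name: "OpcodeBudget", version: 6}
	fmt.Println(checkField(opcodeBudget, 5)) // rejected: introduced in version 6
	fmt.Println(checkField(opcodeBudget, 6)) // <nil>
}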
diff --git a/data/transactions/logictest/ledger.go b/data/transactions/logic/ledger_test.go
index 3ea39cd00..54929b3b8 100644
--- a/data/transactions/logictest/ledger.go
+++ b/data/transactions/logic/ledger_test.go
@@ -14,9 +14,10 @@
// You should have received a copy of the GNU Affero General Public License
// along with go-algorand. If not, see <https://www.gnu.org/licenses/>.
-package logictest
+package logic
import (
+ "errors"
"fmt"
"math/rand"
@@ -58,22 +59,13 @@ type asaParams struct {
Creator basics.Address
}
-// Ledger is a convenient mock ledger that is used by
-// data/transactions/logic It is in its own package so that it can be
-// used by people developing teal code that need a fast testing setup,
-// rather than running against a real network. It also might be
-// expanded to support the Balances interface so that we have fewer
-// mocks doing similar things. By putting it here, it is publicly
-// exported, but will not be imported by non-test code, so won't bloat
-// binary.
+// Ledger is a fake ledger that is "good enough" to reasonably test AVM programs.
type Ledger struct {
- balances map[basics.Address]balanceRecord
- applications map[basics.AppIndex]appParams
- assets map[basics.AssetIndex]asaParams
- trackedCreatables map[int]basics.CreatableIndex
- appID basics.AppIndex
- mods map[basics.AppIndex]map[string]basics.ValueDelta
- rnd basics.Round
+ balances map[basics.Address]balanceRecord
+ applications map[basics.AppIndex]appParams
+ assets map[basics.AssetIndex]asaParams
+ mods map[basics.AppIndex]map[string]basics.ValueDelta
+ rnd basics.Round
}
// MakeLedger constructs a Ledger with the given balances.
@@ -85,7 +77,6 @@ func MakeLedger(balances map[basics.Address]uint64) *Ledger {
}
l.applications = make(map[basics.AppIndex]appParams)
l.assets = make(map[basics.AssetIndex]asaParams)
- l.trackedCreatables = make(map[int]basics.CreatableIndex)
l.mods = make(map[basics.AppIndex]map[string]basics.ValueDelta)
return l
}
@@ -105,25 +96,18 @@ func (l *Ledger) NewAccount(addr basics.Address, balance uint64) {
}
// NewApp adds a new AVM app to the Ledger, and arranges so that future
-// executions will act as though they are that app. It only sets up
-// the id and schema, it inserts no code, since testing will want to
-// try many different code sequences.
+// executions will act as though they are that app. In most uses, it only sets
+// up the id and schema but no code, as testing will want to try many different
+// code sequences.
func (l *Ledger) NewApp(creator basics.Address, appID basics.AppIndex, params basics.AppParams) {
- l.appID = appID
params = params.Clone()
if params.GlobalState == nil {
params.GlobalState = make(basics.TealKeyValue)
}
l.applications[appID] = appParams{
Creator: creator,
- AppParams: params.Clone(),
+ AppParams: params,
}
- br, ok := l.balances[creator]
- if !ok {
- br = makeBalanceRecord(creator, 0)
- }
- br.locals[appID] = make(map[string]basics.TealValue)
- l.balances[creator] = br
}
// NewAsset adds an asset with the given id and params to the ledger.
@@ -140,9 +124,12 @@ func (l *Ledger) NewAsset(creator basics.Address, assetID basics.AssetIndex, par
l.balances[creator] = br
}
-// freshID gets a new creatable ID that isn't in use
-func (l *Ledger) freshID() uint64 {
- for try := l.appID + 1; true; try++ {
+const firstTestID = 5000
+
+// Counter implements LedgerForLogic. It is not really a txn counter, but it is
+// sufficient for the logic package: it just returns a creatable ID that is not in use.
+func (l *Ledger) Counter() uint64 {
+ for try := firstTestID; true; try++ {
if _, ok := l.assets[basics.AssetIndex(try)]; ok {
continue
}
@@ -166,6 +153,9 @@ func (l *Ledger) NewHolding(addr basics.Address, assetID uint64, amount uint64,
// NewLocals essentially "opts in" an address to an app id.
func (l *Ledger) NewLocals(addr basics.Address, appID uint64) {
+ if _, ok := l.balances[addr]; !ok {
+ l.balances[addr] = makeBalanceRecord(addr, 0)
+ }
l.balances[addr].locals[basics.AppIndex(appID)] = basics.TealKeyValue{}
}
@@ -269,12 +259,9 @@ func (l *Ledger) Authorizer(addr basics.Address) (basics.Address, error) {
// GetGlobal returns the current value of a global in an app, taking
// into account the mods created by earlier teal execution.
func (l *Ledger) GetGlobal(appIdx basics.AppIndex, key string) (basics.TealValue, bool, error) {
- if appIdx == basics.AppIndex(0) {
- appIdx = l.appID
- }
params, ok := l.applications[appIdx]
if !ok {
- return basics.TealValue{}, false, fmt.Errorf("no such app")
+ return basics.TealValue{}, false, fmt.Errorf("no such app %d", appIdx)
}
// return most recent value if available
@@ -294,11 +281,10 @@ func (l *Ledger) GetGlobal(appIdx basics.AppIndex, key string) (basics.TealValue
// SetGlobal "sets" a global, but only through the mods mechanism, so
// it can be removed with Reset()
-func (l *Ledger) SetGlobal(key string, value basics.TealValue) error {
- appIdx := l.appID
+func (l *Ledger) SetGlobal(appIdx basics.AppIndex, key string, value basics.TealValue) error {
params, ok := l.applications[appIdx]
if !ok {
- return fmt.Errorf("no such app")
+ return fmt.Errorf("no such app %d", appIdx)
}
// if writing the same value, return
@@ -319,11 +305,10 @@ func (l *Ledger) SetGlobal(key string, value basics.TealValue) error {
// DelGlobal "deletes" a global, but only through the mods mechanism, so
// the deletion can be Reset()
-func (l *Ledger) DelGlobal(key string) error {
- appIdx := l.appID
+func (l *Ledger) DelGlobal(appIdx basics.AppIndex, key string) error {
params, ok := l.applications[appIdx]
if !ok {
- return fmt.Errorf("no such app")
+ return fmt.Errorf("no such app %d", appIdx)
}
exist := false
@@ -349,9 +334,6 @@ func (l *Ledger) DelGlobal(key string) error {
// GetLocal returns the current value bound to a local key, taking
// into account mods caused by earlier executions.
func (l *Ledger) GetLocal(addr basics.Address, appIdx basics.AppIndex, key string, accountIdx uint64) (basics.TealValue, bool, error) {
- if appIdx == 0 {
- appIdx = l.appID
- }
br, ok := l.balances[addr]
if !ok {
return basics.TealValue{}, false, fmt.Errorf("no such address")
@@ -377,9 +359,7 @@ func (l *Ledger) GetLocal(addr basics.Address, appIdx basics.AppIndex, key strin
// SetLocal "sets" the current value bound to a local key using the
// mods mechanism, so it can be Reset()
-func (l *Ledger) SetLocal(addr basics.Address, key string, value basics.TealValue, accountIdx uint64) error {
- appIdx := l.appID
-
+func (l *Ledger) SetLocal(addr basics.Address, appIdx basics.AppIndex, key string, value basics.TealValue, accountIdx uint64) error {
br, ok := l.balances[addr]
if !ok {
return fmt.Errorf("no such address")
@@ -407,9 +387,7 @@ func (l *Ledger) SetLocal(addr basics.Address, key string, value basics.TealValu
// DelLocal "deletes" the current value bound to a local key using the
// mods mechanism, so it can be Reset()
-func (l *Ledger) DelLocal(addr basics.Address, key string, accountIdx uint64) error {
- appIdx := l.appID
-
+func (l *Ledger) DelLocal(addr basics.Address, appIdx basics.AppIndex, key string, accountIdx uint64) error {
br, ok := l.balances[addr]
if !ok {
return fmt.Errorf("no such address")
@@ -442,9 +420,6 @@ func (l *Ledger) DelLocal(addr basics.Address, key string, accountIdx uint64) er
// from NewLocals, but potentially from executing AVM inner
// transactions.
func (l *Ledger) OptedIn(addr basics.Address, appIdx basics.AppIndex) (bool, error) {
- if appIdx == 0 {
- appIdx = l.appID
- }
br, ok := l.balances[addr]
if !ok {
return false, fmt.Errorf("no such address")
@@ -453,18 +428,6 @@ func (l *Ledger) OptedIn(addr basics.Address, appIdx basics.AppIndex) (bool, err
return ok, nil
}
-// SetTrackedCreatable remembers that the given cl "happened" in txn
-// groupIdx of the group, for use by GetCreatableID.
-func (l *Ledger) SetTrackedCreatable(groupIdx int, cl basics.CreatableLocator) {
- l.trackedCreatables[groupIdx] = cl.Index
-}
-
-// GetCreatableID returns the creatable constructed in a given transaction
-// slot. For the test ledger, that's been set up by SetTrackedCreatable
-func (l *Ledger) GetCreatableID(groupIdx int) basics.CreatableIndex {
- return l.trackedCreatables[groupIdx]
-}
-
// AssetHolding gives the amount of an ASA held by an account, or
// error if the account is not opted into the asset.
func (l *Ledger) AssetHolding(addr basics.Address, assetID basics.AssetIndex) (basics.AssetHolding, error) {
@@ -490,43 +453,7 @@ func (l *Ledger) AppParams(appID basics.AppIndex) (basics.AppParams, basics.Addr
if app, ok := l.applications[appID]; ok {
return app.AppParams, app.Creator, nil
}
- return basics.AppParams{}, basics.Address{}, fmt.Errorf("no such app")
-}
-
-// ApplicationID gives ID of the "currently running" app. For this
-// test ledger, that is chosen explicitly.
-func (l *Ledger) ApplicationID() basics.AppIndex {
- return l.appID
-}
-
-// CreatorAddress returns of the address that created the "currently running" app.
-func (l *Ledger) CreatorAddress() basics.Address {
- _, addr, _ := l.AppParams(l.appID)
- return addr
-}
-
-// GetDelta translates the mods set by AVM execution into the standard
-// format of an EvalDelta.
-func (l *Ledger) GetDelta(txn *transactions.Transaction) (evalDelta transactions.EvalDelta, err error) {
- if tkv, ok := l.mods[l.appID]; ok {
- evalDelta.GlobalDelta = tkv
- }
- if len(txn.Accounts) > 0 {
- accounts := make(map[basics.Address]int)
- accounts[txn.Sender] = 0
- for idx, addr := range txn.Accounts {
- accounts[addr] = idx + 1
- }
- evalDelta.LocalDeltas = make(map[uint64]basics.StateDelta)
- for addr, br := range l.balances {
- if idx, ok := accounts[addr]; ok {
- if delta, ok := br.mods[l.appID]; ok {
- evalDelta.LocalDeltas[uint64(idx)] = delta
- }
- }
- }
- }
- return
+ return basics.AppParams{}, basics.Address{}, fmt.Errorf("no such app %d", appID)
}
func (l *Ledger) move(from basics.Address, to basics.Address, amount uint64) error {
@@ -666,11 +593,12 @@ func (l *Ledger) axfer(from basics.Address, xfer transactions.AssetTransferTxnFi
return nil
}
-func (l *Ledger) acfg(from basics.Address, cfg transactions.AssetConfigTxnFields) (transactions.ApplyData, error) {
+func (l *Ledger) acfg(from basics.Address, cfg transactions.AssetConfigTxnFields, ad *transactions.ApplyData) error {
if cfg.ConfigAsset == 0 {
- aid := basics.AssetIndex(l.freshID())
+ aid := basics.AssetIndex(l.Counter())
l.NewAsset(from, aid, cfg.AssetParams)
- return transactions.ApplyData{ConfigAsset: aid}, nil
+ ad.ConfigAsset = aid
+ return nil
}
// This is just a mock. We don't check all the rules about
// not setting fields that have been zeroed. Nor do we keep
@@ -679,7 +607,7 @@ func (l *Ledger) acfg(from basics.Address, cfg transactions.AssetConfigTxnFields
Creator: from,
AssetParams: cfg.AssetParams,
}
- return transactions.ApplyData{}, nil
+ return nil
}
func (l *Ledger) afrz(from basics.Address, frz transactions.AssetFreezeTxnFields) error {
@@ -693,55 +621,124 @@ func (l *Ledger) afrz(from basics.Address, frz transactions.AssetFreezeTxnFields
}
br, ok := l.balances[frz.FreezeAccount]
if !ok {
- return fmt.Errorf("%s does not hold anything", from)
+ return fmt.Errorf("%s does not hold Asset (%d)", frz.FreezeAccount, aid)
}
holding, ok := br.holdings[aid]
if !ok {
- return fmt.Errorf("%s does not hold Asset (%d)", from, aid)
+ return fmt.Errorf("%s does not hold Asset (%d)", frz.FreezeAccount, aid)
}
holding.Frozen = frz.AssetFrozen
br.holdings[aid] = holding
return nil
}
-/* It's gross to reimplement this here, rather than have a way to use
- a ledger that's backed by our mock, but uses the "real" code
- (cowRoundState which implements Balances), as a better test. To
- allow that, we need to move our mocks into separate packages so
- they can be combined in yet *another* package, and avoid circular
- imports.
+func (l *Ledger) appl(from basics.Address, appl transactions.ApplicationCallTxnFields, ad *transactions.ApplyData, gi int, ep *EvalParams) error {
+ aid := appl.ApplicationID
+ if aid == 0 {
+ aid = basics.AppIndex(l.Counter())
+ params := basics.AppParams{
+ ApprovalProgram: appl.ApprovalProgram,
+ ClearStateProgram: appl.ClearStateProgram,
+ GlobalState: map[string]basics.TealValue{},
+ StateSchemas: basics.StateSchemas{
+ LocalStateSchema: basics.StateSchema{
+ NumUint: appl.LocalStateSchema.NumUint,
+ NumByteSlice: appl.LocalStateSchema.NumByteSlice,
+ },
+ GlobalStateSchema: basics.StateSchema{
+ NumUint: appl.GlobalStateSchema.NumUint,
+ NumByteSlice: appl.GlobalStateSchema.NumByteSlice,
+ },
+ },
+ ExtraProgramPages: appl.ExtraProgramPages,
+ }
+ l.NewApp(from, aid, params)
+ ad.ApplicationID = aid
+ }
- This is currently unable to fill the ApplyData objects. That would
- require a whole new level of code duplication.
-*/
+ if appl.OnCompletion == transactions.OptInOC {
+ br, ok := l.balances[from]
+ if !ok {
+ return errors.New("no account")
+ }
+ br.locals[aid] = make(map[string]basics.TealValue)
+ }
-// Perform causes txn to "occur" against the ledger. The returned ad is empty.
-func (l *Ledger) Perform(txn *transactions.Transaction, spec transactions.SpecialAddresses) (transactions.ApplyData, error) {
- var ad transactions.ApplyData
+ // Execute the Approval program
+ params, ok := l.applications[aid]
+ if !ok {
+ return errors.New("No application")
+ }
+ pass, cx, err := EvalContract(params.ApprovalProgram, gi, aid, ep)
+ if err != nil {
+ return err
+ }
+ if !pass {
+ return errors.New("Approval program failed")
+ }
+ ad.EvalDelta = cx.Txn.EvalDelta
- err := l.move(txn.Sender, spec.FeeSink, txn.Fee.Raw)
+ switch appl.OnCompletion {
+ case transactions.NoOpOC:
+ case transactions.OptInOC:
+ // done earlier so locals could be changed
+ case transactions.CloseOutOC:
+ // get the local state, error if not exists, delete it
+ br, ok := l.balances[from]
+ if !ok {
+ return errors.New("no account")
+ }
+ _, ok = br.locals[aid]
+ if !ok {
+ return errors.New("not opted in")
+ }
+ delete(br.locals, aid)
+ case transactions.DeleteApplicationOC:
+ // get the global object, delete it
+ _, ok := l.applications[aid]
+ if !ok {
+ return errors.New("no app")
+ }
+ delete(l.applications, aid)
+ case transactions.UpdateApplicationOC:
+ app, ok := l.applications[aid]
+ if !ok {
+ return errors.New("no app")
+ }
+ app.ApprovalProgram = appl.ApprovalProgram
+ app.ClearStateProgram = appl.ClearStateProgram
+ l.applications[aid] = app
+ }
+ return nil
+}
+
+// Perform causes txn to "occur" against the ledger.
+func (l *Ledger) Perform(gi int, ep *EvalParams) error {
+ txn := &ep.TxnGroup[gi]
+ err := l.move(txn.Txn.Sender, ep.Specials.FeeSink, txn.Txn.Fee.Raw)
if err != nil {
- return ad, err
+ return err
}
- err = l.rekey(txn)
+ err = l.rekey(&txn.Txn)
if err != nil {
- return ad, err
+ return err
}
- switch txn.Type {
+ switch txn.Txn.Type {
case protocol.PaymentTx:
- err = l.pay(txn.Sender, txn.PaymentTxnFields)
+ return l.pay(txn.Txn.Sender, txn.Txn.PaymentTxnFields)
case protocol.AssetTransferTx:
- err = l.axfer(txn.Sender, txn.AssetTransferTxnFields)
+ return l.axfer(txn.Txn.Sender, txn.Txn.AssetTransferTxnFields)
case protocol.AssetConfigTx:
- ad, err = l.acfg(txn.Sender, txn.AssetConfigTxnFields)
+ return l.acfg(txn.Txn.Sender, txn.Txn.AssetConfigTxnFields, &txn.ApplyData)
case protocol.AssetFreezeTx:
- err = l.afrz(txn.Sender, txn.AssetFreezeTxnFields)
+ return l.afrz(txn.Txn.Sender, txn.Txn.AssetFreezeTxnFields)
+ case protocol.ApplicationCallTx:
+ return l.appl(txn.Txn.Sender, txn.Txn.ApplicationCallTxnFields, &txn.ApplyData, gi, ep)
default:
- err = fmt.Errorf("%s txn in AVM", txn.Type)
+ return fmt.Errorf("%s txn in AVM", txn.Txn.Type)
}
- return ad, err
}
// Get() through allocated() implement cowForLogicLedger, so we should
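The relocated fake Ledger is meant to be driven directly from tests inside the logic package. A rough usage sketch, assuming it sits alongside the package's other *_test.go files; this is illustrative only, not code from the diff:

package logic

import (
	"testing"

	"github.com/algorand/go-algorand/data/basics"
)

func TestFakeLedgerSketch(t *testing.T) {
	creator := basics.Address{0x01}
	user := basics.Address{0x02}

	// Fund two accounts, create an app owned by creator, and opt user in.
	l := MakeLedger(map[basics.Address]uint64{creator: 1_000_000})
	l.NewAccount(user, 500_000)
	l.NewApp(creator, 888, basics.AppParams{})
	l.NewLocals(user, 888)

	opted, err := l.OptedIn(user, 888)
	if err != nil || !opted {
		t.Fatalf("expected user to be opted in: %v", err)
	}
}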
diff --git a/data/transactions/logic/opcodes.go b/data/transactions/logic/opcodes.go
index e426a253a..80b73b189 100644
--- a/data/transactions/logic/opcodes.go
+++ b/data/transactions/logic/opcodes.go
@@ -42,6 +42,19 @@ const backBranchEnabledVersion = 4
// using an index into arrays.
const directRefEnabledVersion = 4
+// innerAppsEnabledVersion is the first version that allowed inner app calls. No old
+// apps should be called as inner apps.
+const innerAppsEnabledVersion = 6
+
+// txnEffectsVersion is the first version that allowed the txn opcode to access
+// "effects" (ApplyData info)
+const txnEffectsVersion = 6
+
+// createdResourcesVersion is the first version that allows access to assets and
+// applications that were created in the same group, despite them not being in
+// the Foreign arrays.
+const createdResourcesVersion = 6
+
// opDetails records details such as non-standard costs, immediate
// arguments, or dynamic layout controlled by a check function.
type opDetails struct {
@@ -213,7 +226,7 @@ var OpSpecs = []OpSpec{
// Group scratch space access
{0x3a, "gload", opGload, asmDefault, disDefault, nil, oneAny, 4, runModeApplication, immediates("t", "i")},
{0x3b, "gloads", opGloads, asmDefault, disDefault, oneInt, oneAny, 4, runModeApplication, immediates("i")},
- // Access creatable IDs
+ // Access creatable IDs (consider deprecating, as txn CreatedAssetID, CreatedApplicationID should be enough)
{0x3c, "gaid", opGaid, asmDefault, disDefault, nil, oneInt, 4, runModeApplication, immediates("t")},
{0x3d, "gaids", opGaids, asmDefault, disDefault, oneInt, oneInt, 4, runModeApplication, opDefault},
@@ -269,10 +282,11 @@ var OpSpecs = []OpSpec{
{0x68, "app_local_del", opAppLocalDel, asmDefault, disDefault, oneAny.plus(oneBytes), nil, directRefEnabledVersion, runModeApplication, opDefault},
{0x69, "app_global_del", opAppGlobalDel, asmDefault, disDefault, oneBytes, nil, 2, runModeApplication, opDefault},
- {0x70, "asset_holding_get", opAssetHoldingGet, assembleAssetHolding, disAssetHolding, twoInts, oneAny.plus(oneInt), 2, runModeApplication, immediates("i")},
- {0x70, "asset_holding_get", opAssetHoldingGet, assembleAssetHolding, disAssetHolding, oneAny.plus(oneInt), oneAny.plus(oneInt), directRefEnabledVersion, runModeApplication, immediates("i")},
- {0x71, "asset_params_get", opAssetParamsGet, assembleAssetParams, disAssetParams, oneInt, oneAny.plus(oneInt), 2, runModeApplication, immediates("i")},
- {0x72, "app_params_get", opAppParamsGet, assembleAppParams, disAppParams, oneInt, oneAny.plus(oneInt), 5, runModeApplication, immediates("i")},
+ {0x70, "asset_holding_get", opAssetHoldingGet, assembleAssetHolding, disAssetHolding, twoInts, oneAny.plus(oneInt), 2, runModeApplication, immediates("f")},
+ {0x70, "asset_holding_get", opAssetHoldingGet, assembleAssetHolding, disAssetHolding, oneAny.plus(oneInt), oneAny.plus(oneInt), directRefEnabledVersion, runModeApplication, immediates("f")},
+ {0x71, "asset_params_get", opAssetParamsGet, assembleAssetParams, disAssetParams, oneInt, oneAny.plus(oneInt), 2, runModeApplication, immediates("f")},
+ {0x72, "app_params_get", opAppParamsGet, assembleAppParams, disAppParams, oneInt, oneAny.plus(oneInt), 5, runModeApplication, immediates("f")},
+ {0x73, "acct_params_get", opAcctParamsGet, assembleAcctParams, disAcctParams, oneInt, oneAny.plus(oneInt), 6, runModeApplication, immediates("f")},
{0x78, "min_balance", opMinBalance, asmDefault, disDefault, oneInt, oneInt, 3, runModeApplication, opDefault},
{0x78, "min_balance", opMinBalance, asmDefault, disDefault, oneAny, oneInt, directRefEnabledVersion, runModeApplication, opDefault},
@@ -293,6 +307,7 @@ var OpSpecs = []OpSpec{
{0x93, "bitlen", opBitLen, asmDefault, disDefault, oneAny, oneInt, 4, modeAny, opDefault},
{0x94, "exp", opExp, asmDefault, disDefault, twoInts, oneInt, 4, modeAny, opDefault},
{0x95, "expw", opExpw, asmDefault, disDefault, twoInts, twoInts, 4, modeAny, costly(10)},
+ {0x96, "bsqrt", opBytesSqrt, asmDefault, disDefault, oneBytes, oneBytes, 6, modeAny, costly(40)},
// Byteslice math.
{0xa0, "b+", opBytesPlus, asmDefault, disDefault, twoBytes, oneBytes, 4, modeAny, costly(10)},
@@ -320,12 +335,15 @@ var OpSpecs = []OpSpec{
{0xb4, "itxn", opItxn, asmItxn, disTxn, nil, oneAny, 5, runModeApplication, immediates("f")},
{0xb5, "itxna", opItxna, asmItxna, disTxna, nil, oneAny, 5, runModeApplication, immediates("f", "i")},
{0xb6, "itxn_next", opTxNext, asmDefault, disDefault, nil, nil, 6, runModeApplication, opDefault},
+ {0xb7, "gitxn", opGitxn, asmGitxn, disGtxn, nil, oneAny, 6, runModeApplication, immediates("t", "f")},
+ {0xb8, "gitxna", opGitxna, asmGitxna, disGtxna, nil, oneAny, 6, runModeApplication, immediates("t", "f", "i")},
// Dynamic indexing
{0xc0, "txnas", opTxnas, assembleTxnas, disTxn, oneInt, oneAny, 5, modeAny, immediates("f")},
{0xc1, "gtxnas", opGtxnas, assembleGtxnas, disGtxn, oneInt, oneAny, 5, modeAny, immediates("t", "f")},
{0xc2, "gtxnsas", opGtxnsas, assembleGtxnsas, disTxn, twoInts, oneAny, 5, modeAny, immediates("f")},
{0xc3, "args", opArgs, asmDefault, disDefault, oneInt, oneBytes, 5, runModeSignature, opDefault},
+ {0xc4, "gloadss", opGloadss, asmDefault, disDefault, twoInts, oneAny, 6, runModeApplication, opDefault},
}
type sortByOpcode []OpSpec
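Among the additions, bsqrt (0x96) treats its byte-string argument as a big-endian unsigned integer and pushes the integer square root, again as a byte-string, at a cost of 40. A plain-Go reference of that computation using math/big, not the AVM implementation itself:

package main

import (
	"fmt"
	"math/big"
)

func bsqrt(b []byte) []byte {
	n := new(big.Int).SetBytes(b) // interpret bytes as a big-endian unsigned integer
	return n.Sqrt(n).Bytes()      // floor(sqrt(n)), back to big-endian bytes
}

func main() {
	fmt.Printf("%x\n", bsqrt([]byte{0x01, 0x00})) // sqrt(256) = 16, prints "10"
}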
diff --git a/data/transactions/signedtxn.go b/data/transactions/signedtxn.go
index 6ade34a7e..bbe445f9f 100644
--- a/data/transactions/signedtxn.go
+++ b/data/transactions/signedtxn.go
@@ -133,7 +133,7 @@ func WrapSignedTxnsWithAD(txgroup []SignedTxn) []SignedTxnWithAD {
// FeeCredit computes the amount of fee credit that can be spent on
// inner txns because it was more than required.
-func FeeCredit(txgroup []SignedTxn, minFee uint64) (uint64, error) {
+func FeeCredit(txgroup []SignedTxnWithAD, minFee uint64) (uint64, error) {
minFeeCount := uint64(0)
feesPaid := uint64(0)
for _, stxn := range txgroup {
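FeeCredit now works over a []SignedTxnWithAD group. Conceptually it compares the fees the group actually paid against the minimum it owes, and the surplus is what inner transactions may spend. A rough sketch of that arithmetic under that reading; the real function is also careful about overflow and about which transactions owe a minimum fee:

package main

import "fmt"

func feeCredit(feesPaid []uint64, minFee uint64) (uint64, error) {
	var paid, required uint64
	for _, fee := range feesPaid {
		paid += fee
		required += minFee
	}
	if paid < required {
		return 0, fmt.Errorf("group paid %d, below the required %d", paid, required)
	}
	return paid - required, nil
}

func main() {
	credit, err := feeCredit([]uint64{1000, 3000}, 1000)
	fmt.Println(credit, err) // 2000 <nil>
}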
diff --git a/data/transactions/transaction.go b/data/transactions/transaction.go
index 9a7d97899..d617160e8 100644
--- a/data/transactions/transaction.go
+++ b/data/transactions/transaction.go
@@ -17,6 +17,7 @@
package transactions
import (
+ "encoding/binary"
"errors"
"fmt"
@@ -180,6 +181,19 @@ func (tx Transaction) ID() Txid {
return Txid(crypto.Hash(enc))
}
+// InnerID returns something akin to Txid, but folds in the parent Txid and the
+// index of the inner call.
+func (tx Transaction) InnerID(parent Txid, index int) Txid {
+ input := append(protocol.GetEncodingBuf(), []byte(protocol.Transaction)...)
+ input = append(input, parent[:]...)
+ buf := make([]byte, 8)
+ binary.BigEndian.PutUint64(buf, uint64(index))
+ input = append(input, buf...)
+ enc := tx.MarshalMsg(input)
+ defer protocol.PutEncodingBuf(enc)
+ return Txid(crypto.Hash(enc))
+}
+
// Sign signs a transaction using a given Account's secrets.
func (tx Transaction) Sign(secrets *crypto.SignatureSecrets) SignedTxn {
sig := secrets.Sign(tx)
@@ -405,11 +419,11 @@ func (tx Transaction) WellFormed(spec SpecialAddresses, proto config.ConsensusPa
// Limit the sum of all types of references that bring in account records
if len(tx.Accounts)+len(tx.ForeignApps)+len(tx.ForeignAssets) > proto.MaxAppTotalTxnReferences {
- return fmt.Errorf("tx has too many references, max is %d", proto.MaxAppTotalTxnReferences)
+ return fmt.Errorf("tx references exceed MaxAppTotalTxnReferences = %d", proto.MaxAppTotalTxnReferences)
}
if tx.ExtraProgramPages > uint32(proto.MaxExtraAppProgramPages) {
- return fmt.Errorf("tx.ExtraProgramPages too large, max number of extra pages is %d", proto.MaxExtraAppProgramPages)
+ return fmt.Errorf("tx.ExtraProgramPages exceeds MaxExtraAppProgramPages = %d", proto.MaxExtraAppProgramPages)
}
lap := len(tx.ApprovalProgram)
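InnerID hashes, in order: the standard transaction domain-separation prefix, the parent's Txid, the inner call's 8-byte big-endian index, and finally the msgpack encoding of the inner transaction. A conceptual sketch of just the prefix layout using only the standard library; innerIDPrefix is a hypothetical helper, not part of the codebase:

package main

import (
	"crypto/sha512"
	"encoding/binary"
	"fmt"
)

// innerIDPrefix builds the bytes that precede the encoded inner transaction.
func innerIDPrefix(parent [32]byte, index uint64) []byte {
	input := []byte("TX") // protocol.Transaction domain-separation prefix
	input = append(input, parent[:]...)
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], index)
	return append(input, buf[:]...)
}

func main() {
	var parent [32]byte // a zero parent Txid, for illustration
	prefix := innerIDPrefix(parent, 3)
	// The real code appends the msgpack-encoded txn before hashing with SHA-512/256.
	fmt.Printf("%d prefix bytes, digest %x\n", len(prefix), sha512.Sum512_256(prefix))
}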
diff --git a/data/transactions/transaction_test.go b/data/transactions/transaction_test.go
index b01d7cb3a..037087ee9 100644
--- a/data/transactions/transaction_test.go
+++ b/data/transactions/transaction_test.go
@@ -305,7 +305,7 @@ func TestWellFormedErrors(t *testing.T) {
},
spec: specialAddr,
proto: protoV27,
- expectedError: fmt.Errorf("tx.ExtraProgramPages too large, max number of extra pages is %d", protoV27.MaxExtraAppProgramPages),
+ expectedError: fmt.Errorf("tx.ExtraProgramPages exceeds MaxExtraAppProgramPages = %d", protoV27.MaxExtraAppProgramPages),
},
{
tx: Transaction{
@@ -392,7 +392,7 @@ func TestWellFormedErrors(t *testing.T) {
},
spec: specialAddr,
proto: futureProto,
- expectedError: fmt.Errorf("tx.ExtraProgramPages too large, max number of extra pages is %d", futureProto.MaxExtraAppProgramPages),
+ expectedError: fmt.Errorf("tx.ExtraProgramPages exceeds MaxExtraAppProgramPages = %d", futureProto.MaxExtraAppProgramPages),
},
{
tx: Transaction{
@@ -457,7 +457,7 @@ func TestWellFormedErrors(t *testing.T) {
},
spec: specialAddr,
proto: futureProto,
- expectedError: fmt.Errorf("tx has too many references, max is 8"),
+ expectedError: fmt.Errorf("tx references exceed MaxAppTotalTxnReferences = 8"),
},
{
tx: Transaction{
diff --git a/data/transactions/verify/txn.go b/data/transactions/verify/txn.go
index 3564a45d3..3ffc217c1 100644
--- a/data/transactions/verify/txn.go
+++ b/data/transactions/verify/txn.go
@@ -85,7 +85,7 @@ func PrepareGroupContext(group []transactions.SignedTxn, contextHdr bookkeeping.
},
consensusVersion: contextHdr.CurrentProtocol,
consensusParams: consensusParams,
- minTealVersion: logic.ComputeMinTealVersion(group),
+ minTealVersion: logic.ComputeMinTealVersion(transactions.WrapSignedTxnsWithAD(group), false),
signedGroupTxns: group,
}, nil
}
@@ -289,14 +289,13 @@ func LogicSigSanityCheckBatchVerify(txn *transactions.SignedTxn, groupIndex int,
if groupIndex < 0 {
return errors.New("Negative groupIndex")
}
+ txngroup := transactions.WrapSignedTxnsWithAD(groupCtx.signedGroupTxns)
ep := logic.EvalParams{
- Txn: txn,
Proto: &groupCtx.consensusParams,
- TxnGroup: groupCtx.signedGroupTxns,
- GroupIndex: uint64(groupIndex),
+ TxnGroup: txngroup,
MinTealVersion: &groupCtx.minTealVersion,
}
- err := logic.Check(lsig.Logic, ep)
+ err := logic.CheckSignature(groupIndex, &ep)
if err != nil {
return err
}
@@ -347,13 +346,11 @@ func logicSigBatchVerify(txn *transactions.SignedTxn, groupIndex int, groupCtx *
return errors.New("Negative groupIndex")
}
ep := logic.EvalParams{
- Txn: txn,
Proto: &groupCtx.consensusParams,
- TxnGroup: groupCtx.signedGroupTxns,
- GroupIndex: uint64(groupIndex),
+ TxnGroup: transactions.WrapSignedTxnsWithAD(groupCtx.signedGroupTxns),
MinTealVersion: &groupCtx.minTealVersion,
}
- pass, err := logic.Eval(txn.Lsig.Logic, ep)
+ pass, err := logic.EvalSignature(groupIndex, &ep)
if err != nil {
logicErrTotal.Inc(nil)
return fmt.Errorf("transaction %v: rejected by logic err=%v", txn.ID(), err)
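Both verify paths now build a single EvalParams for the group, wrapping the SignedTxns once and evaluating a LogicSig by its group index. A condensed sketch of that calling convention; evalLsigSketch is a hypothetical helper mirroring the code above, not additional API:

package verify

import (
	"github.com/algorand/go-algorand/config"
	"github.com/algorand/go-algorand/data/transactions"
	"github.com/algorand/go-algorand/data/transactions/logic"
)

func evalLsigSketch(group []transactions.SignedTxn, gi int, proto *config.ConsensusParams, minTealVersion uint64) (bool, error) {
	ep := logic.EvalParams{
		Proto:          proto,
		TxnGroup:       transactions.WrapSignedTxnsWithAD(group),
		MinTealVersion: &minTealVersion,
	}
	return logic.EvalSignature(gi, &ep)
}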
diff --git a/data/transactions/verify/verifiedTxnCache.go b/data/transactions/verify/verifiedTxnCache.go
index c5cf079e1..fdd502833 100644
--- a/data/transactions/verify/verifiedTxnCache.go
+++ b/data/transactions/verify/verifiedTxnCache.go
@@ -128,7 +128,7 @@ func (v *verifiedTransactionCache) GetUnverifiedTranscationGroups(txnGroups [][]
for txnGroupIndex := 0; txnGroupIndex < len(txnGroups); txnGroupIndex++ {
signedTxnGroup := txnGroups[txnGroupIndex]
verifiedTxn := 0
- groupCtx.minTealVersion = logic.ComputeMinTealVersion(signedTxnGroup)
+ groupCtx.minTealVersion = logic.ComputeMinTealVersion(transactions.WrapSignedTxnsWithAD(signedTxnGroup), false)
baseBucket := v.base
for txnIdx := 0; txnIdx < len(signedTxnGroup); txnIdx++ {
diff --git a/ledger/acctupdates.go b/ledger/acctupdates.go
index 62cbd6b07..c1529e2a7 100644
--- a/ledger/acctupdates.go
+++ b/ledger/acctupdates.go
@@ -624,6 +624,11 @@ func (aul *accountUpdatesLedgerEvaluator) GenesisHash() crypto.Digest {
return aul.au.ledger.GenesisHash()
}
+// GenesisProto returns the genesis consensus params
+func (aul *accountUpdatesLedgerEvaluator) GenesisProto() config.ConsensusParams {
+ return aul.au.ledger.GenesisProto()
+}
+
// CompactCertVoters returns the top online accounts at round rnd.
func (aul *accountUpdatesLedgerEvaluator) CompactCertVoters(rnd basics.Round) (voters *ledgercore.VotersForRound, err error) {
return aul.au.voters.getVoters(rnd)
diff --git a/ledger/apply/application.go b/ledger/apply/application.go
index 81197e0d0..bd133fbd5 100644
--- a/ledger/apply/application.go
+++ b/ledger/apply/application.go
@@ -298,12 +298,12 @@ func closeOutApplication(balances Balances, sender basics.Address, appIdx basics
}
func checkPrograms(ac *transactions.ApplicationCallTxnFields, evalParams *logic.EvalParams) error {
- err := logic.CheckStateful(ac.ApprovalProgram, *evalParams)
+ err := logic.CheckContract(ac.ApprovalProgram, evalParams)
if err != nil {
return fmt.Errorf("check failed on ApprovalProgram: %v", err)
}
- err = logic.CheckStateful(ac.ClearStateProgram, *evalParams)
+ err = logic.CheckContract(ac.ClearStateProgram, evalParams)
if err != nil {
return fmt.Errorf("check failed on ClearStateProgram: %v", err)
}
@@ -312,7 +312,7 @@ func checkPrograms(ac *transactions.ApplicationCallTxnFields, evalParams *logic.
}
// ApplicationCall evaluates ApplicationCall transaction
-func ApplicationCall(ac transactions.ApplicationCallTxnFields, header transactions.Header, balances Balances, ad *transactions.ApplyData, evalParams *logic.EvalParams, txnCounter uint64) (err error) {
+func ApplicationCall(ac transactions.ApplicationCallTxnFields, header transactions.Header, balances Balances, ad *transactions.ApplyData, gi int, evalParams *logic.EvalParams, txnCounter uint64) (err error) {
defer func() {
// If we are returning a non-nil error, then don't return a
// non-empty EvalDelta. Not required for correctness.
@@ -342,11 +342,7 @@ func ApplicationCall(ac transactions.ApplicationCallTxnFields, header transactio
if err != nil {
return
}
- // No separate config for activating storage in AD because
- // inner transactions can't be turned on without this change.
- if balances.ConsensusParams().MaxInnerTransactions > 0 {
- ad.ApplicationID = appIdx
- }
+ ad.ApplicationID = appIdx
}
// Fetch the application parameters, if they exist
@@ -386,7 +382,7 @@ func ApplicationCall(ac transactions.ApplicationCallTxnFields, header transactio
// If the app still exists, run the ClearStateProgram
if exists {
- pass, evalDelta, err := balances.StatefulEval(*evalParams, appIdx, params.ClearStateProgram)
+ pass, evalDelta, err := balances.StatefulEval(gi, evalParams, appIdx, params.ClearStateProgram)
if err != nil {
// Fail on non-logic eval errors and ignore LogicEvalError errors
if _, ok := err.(ledgercore.LogicEvalError); !ok {
@@ -418,7 +414,7 @@ func ApplicationCall(ac transactions.ApplicationCallTxnFields, header transactio
}
// Execute the Approval program
- approved, evalDelta, err := balances.StatefulEval(*evalParams, appIdx, params.ApprovalProgram)
+ approved, evalDelta, err := balances.StatefulEval(gi, evalParams, appIdx, params.ApprovalProgram)
if err != nil {
return err
}
diff --git a/ledger/apply/application_test.go b/ledger/apply/application_test.go
index 17cced998..a6c83ed8e 100644
--- a/ledger/apply/application_test.go
+++ b/ledger/apply/application_test.go
@@ -225,7 +225,7 @@ func (b *testBalances) DeallocateAsset(addr basics.Address, index basics.AssetIn
return nil
}
-func (b *testBalances) StatefulEval(params logic.EvalParams, aidx basics.AppIndex, program []byte) (passed bool, evalDelta transactions.EvalDelta, err error) {
+func (b *testBalances) StatefulEval(gi int, params *logic.EvalParams, aidx basics.AppIndex, program []byte) (passed bool, evalDelta transactions.EvalDelta, err error) {
return b.pass, b.delta, b.err
}
@@ -258,7 +258,7 @@ func (b *testBalancesPass) Deallocate(addr basics.Address, aidx basics.AppIndex,
return nil
}
-func (b *testBalancesPass) StatefulEval(params logic.EvalParams, aidx basics.AppIndex, program []byte) (passed bool, evalDelta transactions.EvalDelta, err error) {
+func (b *testBalancesPass) StatefulEval(gi int, params *logic.EvalParams, aidx basics.AppIndex, program []byte) (passed bool, evalDelta transactions.EvalDelta, err error) {
return true, b.delta, nil
}
@@ -487,7 +487,7 @@ func TestAppCallApplyCreate(t *testing.T) {
var txnCounter uint64 = 1
var b testBalances
- err := ApplicationCall(ac, h, &b, nil, &ep, txnCounter)
+ err := ApplicationCall(ac, h, &b, nil, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "ApplicationCall cannot have nil ApplyData")
a.Equal(0, b.put)
@@ -496,7 +496,7 @@ func TestAppCallApplyCreate(t *testing.T) {
b.balances[creator] = basics.AccountData{}
var ad *transactions.ApplyData = &transactions.ApplyData{}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "max created apps per acct is 0")
a.Equal(0, b.put)
@@ -508,7 +508,7 @@ func TestAppCallApplyCreate(t *testing.T) {
// this test will succeed in creating the app, but then fail
// because the mock balances doesn't update the creators table
// so it will think the app doesn't exist
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "applications that do not exist")
a.Equal(1, b.put)
@@ -526,7 +526,7 @@ func TestAppCallApplyCreate(t *testing.T) {
cp.AppParams = cloneAppParams(saved.AppParams)
cp.AppLocalStates = cloneAppLocalStates(saved.AppLocalStates)
b.balances[creator] = cp
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "transaction rejected by ApprovalProgram")
a.Equal(uint64(b.allocatedAppIdx), txnCounter+1)
@@ -549,7 +549,7 @@ func TestAppCallApplyCreate(t *testing.T) {
b.balances[creator] = cp
ac.GlobalStateSchema = basics.StateSchema{NumUint: 1}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(appIdx, b.allocatedAppIdx)
a.Equal(1, b.put)
@@ -564,7 +564,7 @@ func TestAppCallApplyCreate(t *testing.T) {
a.Equal(basics.StateSchema{}, br.AppParams[appIdx].LocalStateSchema)
ac.ExtraProgramPages = 1
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
br = b.putBalances[creator]
a.Equal(uint32(1), br.AppParams[appIdx].ExtraProgramPages)
@@ -606,7 +606,7 @@ func TestAppCallApplyCreateOptIn(t *testing.T) {
gd := map[string]basics.ValueDelta{"uint": {Action: basics.SetUintAction, Uint: 1}}
b.delta = transactions.EvalDelta{GlobalDelta: gd}
- err := ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err := ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(appIdx, b.allocatedAppIdx)
br := b.balances[creator]
@@ -768,7 +768,7 @@ func TestAppCallClearState(t *testing.T) {
b.pass = true
// check app not exist and not opted in
b.balances[sender] = basics.AccountData{}
- err := ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err := ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "is not currently opted in to app")
a.Equal(0, b.put)
@@ -777,7 +777,7 @@ func TestAppCallClearState(t *testing.T) {
b.balances[sender] = basics.AccountData{
AppLocalStates: map[basics.AppIndex]basics.AppLocalState{appIdx: {}},
}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(1, b.put)
br := b.putBalances[sender]
@@ -795,7 +795,7 @@ func TestAppCallClearState(t *testing.T) {
appIdx: {Schema: basics.StateSchema{NumUint: 10}},
},
}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(1, b.put)
br = b.putBalances[sender]
@@ -821,7 +821,7 @@ func TestAppCallClearState(t *testing.T) {
// one put: to opt out
b.pass = false
b.delta = transactions.EvalDelta{GlobalDelta: nil}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(1, b.put)
br = b.putBalances[sender]
@@ -836,7 +836,7 @@ func TestAppCallClearState(t *testing.T) {
b.pass = true
b.delta = transactions.EvalDelta{GlobalDelta: nil}
b.err = ledgercore.LogicEvalError{Err: fmt.Errorf("test error")}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(1, b.put)
br = b.putBalances[sender]
@@ -851,7 +851,7 @@ func TestAppCallClearState(t *testing.T) {
b.pass = true
b.delta = transactions.EvalDelta{GlobalDelta: nil}
b.err = fmt.Errorf("test error")
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
br = b.putBalances[sender]
a.Equal(0, len(br.AppLocalStates))
@@ -866,7 +866,7 @@ func TestAppCallClearState(t *testing.T) {
b.err = nil
gd := basics.StateDelta{"uint": {Action: basics.SetUintAction, Uint: 1}}
b.delta = transactions.EvalDelta{GlobalDelta: gd}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(1, b.put)
a.Equal(appIdx, b.deAllocatedAppIdx)
@@ -879,7 +879,7 @@ func TestAppCallClearState(t *testing.T) {
b.err = nil
logs := []string{"a"}
b.delta = transactions.EvalDelta{Logs: []string{"a"}}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(transactions.EvalDelta{Logs: logs}, ad.EvalDelta)
}
@@ -926,7 +926,7 @@ func TestAppCallApplyCloseOut(t *testing.T) {
ep.Proto = &proto
b.pass = false
- err := ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err := ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "transaction rejected by ApprovalProgram")
a.Equal(0, b.put)
@@ -937,7 +937,7 @@ func TestAppCallApplyCloseOut(t *testing.T) {
// check closing on empty sender's balance record
b.pass = true
b.balances[sender] = basics.AccountData{}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "is not opted in to app")
a.Equal(0, b.put)
@@ -953,7 +953,7 @@ func TestAppCallApplyCloseOut(t *testing.T) {
b.balances[sender] = basics.AccountData{
AppLocalStates: map[basics.AppIndex]basics.AppLocalState{appIdx: {}},
}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(1, b.put)
br = b.putBalances[creator]
@@ -970,7 +970,7 @@ func TestAppCallApplyCloseOut(t *testing.T) {
b.balances[sender] = basics.AccountData{
AppLocalStates: map[basics.AppIndex]basics.AppLocalState{appIdx: {}},
}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(transactions.EvalDelta{Logs: logs}, ad.EvalDelta)
}
@@ -1019,7 +1019,7 @@ func TestAppCallApplyUpdate(t *testing.T) {
ep.Proto = &proto
b.pass = false
- err := ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err := ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "transaction rejected by ApprovalProgram")
a.Equal(0, b.put)
@@ -1030,7 +1030,7 @@ func TestAppCallApplyUpdate(t *testing.T) {
// check updating on empty sender's balance record - happy case
b.pass = true
b.balances[sender] = basics.AccountData{}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(1, b.put)
br = b.balances[creator]
@@ -1069,7 +1069,7 @@ func TestAppCallApplyUpdate(t *testing.T) {
logs := []string{"a"}
b.delta = transactions.EvalDelta{Logs: []string{"a"}}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(transactions.EvalDelta{Logs: logs}, ad.EvalDelta)
@@ -1094,7 +1094,7 @@ func TestAppCallApplyUpdate(t *testing.T) {
}
b.pass = true
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), fmt.Sprintf("updateApplication %s program too long", test.name))
}
@@ -1110,7 +1110,7 @@ func TestAppCallApplyUpdate(t *testing.T) {
ClearStateProgram: []byte{2},
}
b.pass = true
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
// check extraProgramPages is used and long sum rejected
@@ -1121,7 +1121,7 @@ func TestAppCallApplyUpdate(t *testing.T) {
ClearStateProgram: appr,
}
b.pass = true
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "updateApplication app programs too long")
@@ -1174,7 +1174,7 @@ func TestAppCallApplyDelete(t *testing.T) {
ep.Proto = &proto
b.pass = false
- err := ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err := ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "transaction rejected by ApprovalProgram")
a.Equal(0, b.put)
@@ -1189,7 +1189,7 @@ func TestAppCallApplyDelete(t *testing.T) {
b.pass = true
b.balances[sender] = basics.AccountData{}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(appIdx, b.deAllocatedAppIdx)
a.Equal(1, b.put)
@@ -1219,7 +1219,7 @@ func TestAppCallApplyDelete(t *testing.T) {
b.balances[creator] = cp
b.pass = true
b.balances[sender] = basics.AccountData{}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(appIdx, b.deAllocatedAppIdx)
a.Equal(1, b.put)
@@ -1238,7 +1238,7 @@ func TestAppCallApplyDelete(t *testing.T) {
}
logs := []string{"a"}
b.delta = transactions.EvalDelta{Logs: []string{"a"}}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(transactions.EvalDelta{Logs: logs}, ad.EvalDelta)
}
@@ -1279,7 +1279,7 @@ func TestAppCallApplyCreateClearState(t *testing.T) {
b.delta = transactions.EvalDelta{GlobalDelta: gd}
// check creation on empty balance record
- err := ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err := ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.Error(err)
a.Contains(err.Error(), "not currently opted in")
a.Equal(appIdx, b.allocatedAppIdx)
@@ -1329,7 +1329,7 @@ func TestAppCallApplyCreateDelete(t *testing.T) {
b.delta = transactions.EvalDelta{GlobalDelta: gd}
// check creation on empty balance record
- err := ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err := ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(appIdx, b.allocatedAppIdx)
a.Equal(transactions.EvalDelta{GlobalDelta: gd}, ad.EvalDelta)
@@ -1338,7 +1338,7 @@ func TestAppCallApplyCreateDelete(t *testing.T) {
logs := []string{"a"}
b.delta = transactions.EvalDelta{Logs: []string{"a"}}
- err = ApplicationCall(ac, h, &b, ad, &ep, txnCounter)
+ err = ApplicationCall(ac, h, &b, ad, 0, &ep, txnCounter)
a.NoError(err)
a.Equal(transactions.EvalDelta{Logs: logs}, ad.EvalDelta)
diff --git a/ledger/apply/apply.go b/ledger/apply/apply.go
index c7d998268..a73c970b9 100644
--- a/ledger/apply/apply.go
+++ b/ledger/apply/apply.go
@@ -54,7 +54,7 @@ type Balances interface {
// StatefulEval executes a TEAL program in stateful mode on the balances.
// It returns whether the program passed and its error. It also returns
// an EvalDelta that contains the changes made by the program.
- StatefulEval(params logic.EvalParams, aidx basics.AppIndex, program []byte) (passed bool, evalDelta transactions.EvalDelta, err error)
+ StatefulEval(gi int, params *logic.EvalParams, aidx basics.AppIndex, program []byte) (passed bool, evalDelta transactions.EvalDelta, err error)
// Move MicroAlgos from one account to another, doing all necessary overflow checking (convenience method)
// TODO: Does this need to be part of the balances interface, or can it just be implemented here as a function that calls Put and Get?
diff --git a/ledger/apply/asset.go b/ledger/apply/asset.go
index c3192bb84..c6ee4826d 100644
--- a/ledger/apply/asset.go
+++ b/ledger/apply/asset.go
@@ -100,12 +100,7 @@ func AssetConfig(cc transactions.AssetConfigTxnFields, header transactions.Heade
return err
}
- // Record the index used. No separate config for activating
- // storage in AD because inner transactions can't be turned on
- // without this change.
- if balances.ConsensusParams().MaxInnerTransactions > 0 {
- ad.ConfigAsset = newidx
- }
+ ad.ConfigAsset = newidx
// Tell the cow what asset we created
err = balances.AllocateAsset(header.Sender, newidx, true)
diff --git a/ledger/apply/keyreg_test.go b/ledger/apply/keyreg_test.go
index ea0b24195..8a731d6e0 100644
--- a/ledger/apply/keyreg_test.go
+++ b/ledger/apply/keyreg_test.go
@@ -78,7 +78,7 @@ func (balances keyregTestBalances) DeallocateAsset(addr basics.Address, index ba
return nil
}
-func (balances keyregTestBalances) StatefulEval(logic.EvalParams, basics.AppIndex, []byte) (bool, transactions.EvalDelta, error) {
+func (balances keyregTestBalances) StatefulEval(int, *logic.EvalParams, basics.AppIndex, []byte) (bool, transactions.EvalDelta, error) {
return false, transactions.EvalDelta{}, nil
}
diff --git a/ledger/apply/mockBalances_test.go b/ledger/apply/mockBalances_test.go
index 06e8a6d38..d2e121718 100644
--- a/ledger/apply/mockBalances_test.go
+++ b/ledger/apply/mockBalances_test.go
@@ -66,7 +66,7 @@ func (balances mockBalances) DeallocateAsset(addr basics.Address, index basics.A
return nil
}
-func (balances mockBalances) StatefulEval(logic.EvalParams, basics.AppIndex, []byte) (bool, transactions.EvalDelta, error) {
+func (balances mockBalances) StatefulEval(int, *logic.EvalParams, basics.AppIndex, []byte) (bool, transactions.EvalDelta, error) {
return false, transactions.EvalDelta{}, nil
}
diff --git a/ledger/apptxn_test.go b/ledger/apptxn_test.go
deleted file mode 100644
index cbefb140b..000000000
--- a/ledger/apptxn_test.go
+++ /dev/null
@@ -1,1339 +0,0 @@
-// Copyright (C) 2019-2022 Algorand, Inc.
-// This file is part of go-algorand
-//
-// go-algorand is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Affero General Public License as
-// published by the Free Software Foundation, either version 3 of the
-// License, or (at your option) any later version.
-//
-// go-algorand is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Affero General Public License for more details.
-//
-// You should have received a copy of the GNU Affero General Public License
-// along with go-algorand. If not, see <https://www.gnu.org/licenses/>.
-
-package ledger
-
-import (
- "fmt"
- "testing"
-
- "github.com/stretchr/testify/require"
-
- "github.com/algorand/go-algorand/agreement"
- "github.com/algorand/go-algorand/config"
- "github.com/algorand/go-algorand/crypto"
- "github.com/algorand/go-algorand/data/basics"
- "github.com/algorand/go-algorand/data/bookkeeping"
- "github.com/algorand/go-algorand/data/transactions"
- "github.com/algorand/go-algorand/data/txntest"
- "github.com/algorand/go-algorand/ledger/internal"
- "github.com/algorand/go-algorand/ledger/ledgercore"
- ledgertesting "github.com/algorand/go-algorand/ledger/testing"
- "github.com/algorand/go-algorand/logging"
- "github.com/algorand/go-algorand/protocol"
- "github.com/algorand/go-algorand/test/partitiontest"
-)
-
-// main wraps up some TEAL source in a header and footer so that it is
-// an app that does nothing at create time, but otherwise runs source,
-// then approves, if the source avoids panicking and leaves the stack
-// empty.
-func main(source string) string {
- return fmt.Sprintf(`txn ApplicationID
- bz end
- %s
- end: int 1`, source)
-}
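
The wrapper above only runs `source` on calls made after creation: at create time ApplicationID is zero, so execution branches straight to the approving `int 1`. Below is a minimal standalone sketch of the same wrapping (hypothetical `wrap` name, standard library only, not part of this diff) that prints the expanded TEAL so it can be inspected directly.

package main

import "fmt"

// wrap mirrors the test helper: skip source at create time (ApplicationID
// is zero), otherwise run it and approve with `int 1`.
func wrap(source string) string {
	return fmt.Sprintf(`txn ApplicationID
bz end
%s
end: int 1`, source)
}

func main() {
	fmt.Println(wrap("itxn_begin\nint pay\nitxn_field TypeEnum\nitxn_submit"))
}
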
-
-// newTestLedger creates an in-memory Ledger that is as realistic as
-// possible. It has Rewards and FeeSink properly configured.
-func newTestLedger(t testing.TB, balances bookkeeping.GenesisBalances) *Ledger {
- var genHash crypto.Digest
- crypto.RandBytes(genHash[:])
- genBlock, err := bookkeeping.MakeGenesisBlock(protocol.ConsensusFuture, balances, "test", genHash)
- require.NoError(t, err)
- require.False(t, genBlock.FeeSink.IsZero())
- require.False(t, genBlock.RewardsPool.IsZero())
- dbName := fmt.Sprintf("%s.%d", t.Name(), crypto.RandUint64())
- cfg := config.GetDefaultLocal()
- cfg.Archival = true
- l, err := OpenLedger(logging.Base(), dbName, true, ledgercore.InitState{
- Block: genBlock,
- Accounts: balances.Balances,
- GenesisHash: genHash,
- }, cfg)
- require.NoError(t, err)
- return l
-}
-
-// nextBlock begins evaluation of a new block, after ledger creation or endBlock()
-func (ledger *Ledger) nextBlock(t testing.TB) *internal.BlockEvaluator {
- rnd := ledger.Latest()
- hdr, err := ledger.BlockHdr(rnd)
- require.NoError(t, err)
-
- nextHdr := bookkeeping.MakeBlock(hdr).BlockHeader
- eval, err := ledger.StartEvaluator(nextHdr, 0, 0)
- require.NoError(t, err)
- return eval
-}
-
-// endBlock completes the block being created and returns the ValidatedBlock for inspection
-func (ledger *Ledger) endBlock(t testing.TB, eval testingEvaluator) *ledgercore.ValidatedBlock {
- validatedBlock, err := eval.BlockEvaluator.GenerateBlock()
- require.NoError(t, err)
- err = ledger.AddValidatedBlock(*validatedBlock, agreement.Certificate{})
- require.NoError(t, err)
- return validatedBlock
-}
-
-// lookup gets the current AccountData for an address
-func (ledger *Ledger) lookup(t testing.TB, addr basics.Address) basics.AccountData {
- rnd := ledger.Latest()
- ad, err := ledger.Lookup(rnd, addr)
- require.NoError(t, err)
- return ad
-}
-
-// micros gets the current microAlgo balance for an address
-func (ledger *Ledger) micros(t testing.TB, addr basics.Address) uint64 {
- return ledger.lookup(t, addr).MicroAlgos.Raw
-}
-
-// asa gets the current balance and opt-in status of an ASA for an address
-func (ledger *Ledger) asa(t testing.TB, addr basics.Address, asset basics.AssetIndex) (uint64, bool) {
- if holding, ok := ledger.lookup(t, addr).Assets[asset]; ok {
- return holding.Amount, true
- }
- return 0, false
-}
-
-// asaParams gets the asset params for a given ASA index
-func (ledger *Ledger) asaParams(t testing.TB, asset basics.AssetIndex) (basics.AssetParams, error) {
- creator, ok, err := ledger.GetCreator(basics.CreatableIndex(asset), basics.AssetCreatable)
- if err != nil {
- return basics.AssetParams{}, err
- }
- if !ok {
- return basics.AssetParams{}, fmt.Errorf("no asset (%d)", asset)
- }
- if params, ok := ledger.lookup(t, creator).AssetParams[asset]; ok {
- return params, nil
- }
- return basics.AssetParams{}, fmt.Errorf("bad lookup (%d)", asset)
-}
-
-type testingEvaluator struct {
- *internal.BlockEvaluator
- ledger *Ledger
-}
-
-func (eval *testingEvaluator) fillDefaults(txn *txntest.Txn) {
- if txn.GenesisHash.IsZero() {
- txn.GenesisHash = eval.ledger.GenesisHash()
- }
- if txn.FirstValid == 0 {
- txn.FirstValid = eval.Round()
- }
- txn.FillDefaults(eval.ledger.genesisProto)
-}
-
-func (eval *testingEvaluator) txn(t testing.TB, txn *txntest.Txn, problem ...string) {
- t.Helper()
- eval.fillDefaults(txn)
- stxn := txn.SignedTxn()
- err := eval.TestTransactionGroup([]transactions.SignedTxn{stxn})
- if err != nil {
- if len(problem) == 1 {
- require.Contains(t, err.Error(), problem[0])
- } else {
- require.NoError(t, err) // Will obviously fail
- }
- return
- }
- err = eval.Transaction(stxn, transactions.ApplyData{})
- if err != nil {
- if len(problem) == 1 {
- require.Contains(t, err.Error(), problem[0])
- } else {
- require.NoError(t, err) // Will obviously fail
- }
- return
- }
- require.Len(t, problem, 0)
-}
-
-func (eval *testingEvaluator) txns(t testing.TB, txns ...*txntest.Txn) {
- t.Helper()
- for _, txn := range txns {
- eval.txn(t, txn)
- }
-}
-
-func (eval *testingEvaluator) txgroup(t testing.TB, txns ...*txntest.Txn) error {
- t.Helper()
- for _, txn := range txns {
- eval.fillDefaults(txn)
- }
- txgroup := txntest.SignedTxns(txns...)
-
- err := eval.TestTransactionGroup(txgroup)
- if err != nil {
- return err
- }
-
- err = eval.TransactionGroup(transactions.WrapSignedTxnsWithAD(txgroup))
- return err
-}
-
-// TestPayAction ensures a pay in teal affects balances
-func TestPayAction(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- create := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: main(`
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 5000
- itxn_field Amount
- txn Accounts 1
- itxn_field Receiver
- itxn_submit
-`),
- }
-
- ai := basics.AppIndex(1)
- fund := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: ai.Address(),
- Amount: 200000, // account min balance, plus fees
- }
-
- payout1 := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: ai,
- Accounts: []basics.Address{addrs[1]}, // pay self
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &create, &fund, &payout1)
- vb := l.endBlock(t, eval)
-
- // AD contains expected appIndex
- require.Equal(t, ai, vb.Block().Payset[0].ApplyData.ApplicationID)
-
- ad0 := l.micros(t, addrs[0])
- ad1 := l.micros(t, addrs[1])
- app := l.micros(t, ai.Address())
-
- // create(1000) and fund(1000 + 200000)
- require.Equal(t, uint64(202000), genBalances.Balances[addrs[0]].MicroAlgos.Raw-ad0)
- // paid 5000, but 1000 fee
- require.Equal(t, uint64(4000), ad1-genBalances.Balances[addrs[1]].MicroAlgos.Raw)
- // app still has 194000 (paid out 5000, and paid fee to do it)
- require.Equal(t, uint64(194000), app)
-
- // Build up Residue in RewardsState so it's ready to pay
- for i := 1; i < 10; i++ {
- eval = testingEvaluator{l.nextBlock(t), l}
- l.endBlock(t, eval)
- }
-
- eval = testingEvaluator{l.nextBlock(t), l}
- payout2 := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: ai,
- Accounts: []basics.Address{addrs[2]}, // pay other
- }
- eval.txn(t, &payout2)
- // confirm that modifiedAccounts can see account in inner txn
- found := false
- vb = l.endBlock(t, eval)
-
- deltas := vb.Delta()
- for _, addr := range deltas.Accts.ModifiedAccounts() {
- if addr == addrs[2] {
- found = true
- }
- }
- require.True(t, found)
-
- payInBlock := vb.Block().Payset[0]
- rewards := payInBlock.ApplyData.SenderRewards.Raw
- require.Greater(t, rewards, uint64(2000)) // some biggish number
- inners := payInBlock.ApplyData.EvalDelta.InnerTxns
- require.Len(t, inners, 1)
-
- // addr[2] is going to get the same rewards as addr[1], who
- // originally sent the top-level txn. Both had their algo balance
- // touched and have very nearly the same balance.
- require.Equal(t, rewards, inners[0].ReceiverRewards.Raw)
- // app gets none, because it has less than 1A
- require.Equal(t, uint64(0), inners[0].SenderRewards.Raw)
-
- ad1 = l.micros(t, addrs[1])
- ad2 := l.micros(t, addrs[2])
- app = l.micros(t, ai.Address())
-
- // paid 5000 in the first payout (only), but paid a 1000 fee in each payout txn
- require.Equal(t, rewards+3000, ad1-genBalances.Balances[addrs[1]].MicroAlgos.Raw)
- // app still has 188000 (paid out 10000, and paid 2k fees to do it)
- // no rewards because it owns less than an algo
- require.Equal(t, uint64(200000)-10000-2000, app)
-
- // paid 5000 by payout2, never paid any fees, got same rewards
- require.Equal(t, rewards+uint64(5000), ad2-genBalances.Balances[addrs[2]].MicroAlgos.Raw)
-
- // Now fund the app account much more, so we can confirm it gets rewards.
- tenkalgos := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: ai.Address(),
- Amount: 10 * 1000 * 1000000, // 10,000 Algos, so the app can start earning rewards
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &tenkalgos)
- l.endBlock(t, eval)
- beforepay := l.micros(t, ai.Address())
-
- // Build up Residue in RewardsState so it's ready to pay again
- for i := 1; i < 10; i++ {
- eval = testingEvaluator{l.nextBlock(t), l}
- l.endBlock(t, eval)
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, payout2.Noted("2"))
- vb = l.endBlock(t, eval)
-
- afterpay := l.micros(t, ai.Address())
-
- payInBlock = vb.Block().Payset[0]
- inners = payInBlock.ApplyData.EvalDelta.InnerTxns
- require.Len(t, inners, 1)
-
- appreward := inners[0].SenderRewards.Raw
- require.Greater(t, appreward, uint64(1000))
-
- require.Equal(t, beforepay+appreward-5000-1000, afterpay)
-}
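
The balance assertions above reduce to simple fee arithmetic: every transaction here, inner or top-level, pays the flat 1000 microAlgo fee. A standalone sketch (illustration only, not part of the diff) that recomputes the first three expected deltas:

package main

import "fmt"

func main() {
	const fee = 1000
	const fund = 200000
	const innerPay = 5000

	// addrs[0] signed the create and fund txns and supplied the funding amount.
	spentByCreator := fee + (fee + fund) // 202000
	// addrs[1] signed the app call (one fee) and received the inner pay.
	gainedByReceiver := innerPay - fee // 4000
	// The app account keeps the funding minus the inner pay and its fee.
	appBalance := fund - innerPay - fee // 194000

	fmt.Println(spentByCreator, gainedByReceiver, appBalance)
}
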
-
-// TestAxferAction ensures axfers in teal have the intended effects
-func TestAxferAction(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- asa := txntest.Txn{
- Type: "acfg",
- Sender: addrs[0],
- AssetParams: basics.AssetParams{
- Total: 1000000,
- Decimals: 3,
- UnitName: "oz",
- AssetName: "Gold",
- URL: "https://gold.rush/",
- },
- }
-
- app := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: main(`
- itxn_begin
- int axfer
- itxn_field TypeEnum
- txn Assets 0
- itxn_field XferAsset
-
- txn ApplicationArgs 0
- byte "optin"
- ==
- bz withdraw
- // let AssetAmount default to 0
- global CurrentApplicationAddress
- itxn_field AssetReceiver
- b submit
-withdraw:
- txn ApplicationArgs 0
- byte "close"
- ==
- bz noclose
- txn Accounts 1
- itxn_field AssetCloseTo
- b skipamount
-noclose: int 10000
- itxn_field AssetAmount
-skipamount:
- txn Accounts 1
- itxn_field AssetReceiver
-submit: itxn_submit
-`),
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &asa, &app)
- vb := l.endBlock(t, eval)
-
- asaIndex := basics.AssetIndex(1)
- require.Equal(t, asaIndex, vb.Block().Payset[0].ApplyData.ConfigAsset)
- appIndex := basics.AppIndex(2)
- require.Equal(t, appIndex, vb.Block().Payset[1].ApplyData.ApplicationID)
-
- fund := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: appIndex.Address(),
- Amount: 300000, // account min balance, opt-in min balance, plus fees
- // stay under 1M, to avoid rewards complications
- }
-
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &fund)
- l.endBlock(t, eval)
-
- fundgold := txntest.Txn{
- Type: "axfer",
- Sender: addrs[0],
- XferAsset: asaIndex,
- AssetReceiver: appIndex.Address(),
- AssetAmount: 20000,
- }
-
- // Fail, because app account is not opted in.
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &fundgold, fmt.Sprintf("asset %d missing", asaIndex))
- l.endBlock(t, eval)
-
- amount, in := l.asa(t, appIndex.Address(), asaIndex)
- require.False(t, in)
- require.Equal(t, amount, uint64(0))
-
- optin := txntest.Txn{
- Type: "appl",
- ApplicationID: appIndex,
- Sender: addrs[0],
- ApplicationArgs: [][]byte{[]byte("optin")},
- ForeignAssets: []basics.AssetIndex{asaIndex},
- }
-
- // Tell the app to opt itself in.
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &optin)
- l.endBlock(t, eval)
-
- amount, in = l.asa(t, appIndex.Address(), asaIndex)
- require.True(t, in)
- require.Equal(t, amount, uint64(0))
-
- // Now, succeed, because opted in.
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &fundgold)
- l.endBlock(t, eval)
-
- amount, in = l.asa(t, appIndex.Address(), asaIndex)
- require.True(t, in)
- require.Equal(t, amount, uint64(20000))
-
- withdraw := txntest.Txn{
- Type: "appl",
- ApplicationID: appIndex,
- Sender: addrs[0],
- ApplicationArgs: [][]byte{[]byte("withdraw")},
- ForeignAssets: []basics.AssetIndex{asaIndex},
- Accounts: []basics.Address{addrs[0]},
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &withdraw)
- l.endBlock(t, eval)
-
- amount, in = l.asa(t, appIndex.Address(), asaIndex)
- require.True(t, in)
- require.Equal(t, amount, uint64(10000))
-
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, withdraw.Noted("2"))
- l.endBlock(t, eval)
-
- amount, in = l.asa(t, appIndex.Address(), asaIndex)
- require.True(t, in) // Zero left, but still opted in
- require.Equal(t, amount, uint64(0))
-
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, withdraw.Noted("3"), "underflow on subtracting")
- l.endBlock(t, eval)
-
- amount, in = l.asa(t, appIndex.Address(), asaIndex)
- require.True(t, in) // Zero left, but still opted in
- require.Equal(t, amount, uint64(0))
-
- close := txntest.Txn{
- Type: "appl",
- ApplicationID: appIndex,
- Sender: addrs[0],
- ApplicationArgs: [][]byte{[]byte("close")},
- ForeignAssets: []basics.AssetIndex{asaIndex},
- Accounts: []basics.Address{addrs[0]},
- }
-
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &close)
- l.endBlock(t, eval)
-
- amount, in = l.asa(t, appIndex.Address(), asaIndex)
- require.False(t, in) // Zero left, not opted in
- require.Equal(t, amount, uint64(0))
-
- // Now, fail again, opted out
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, fundgold.Noted("2"), fmt.Sprintf("asset %d missing", asaIndex))
- l.endBlock(t, eval)
-
- // Do it all again, so we can test closeTo when we have a non-zero balance
- // Tell the app to opt itself in.
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, optin.Noted("a"), fundgold.Noted("a"))
- l.endBlock(t, eval)
-
- amount, _ = l.asa(t, appIndex.Address(), asaIndex)
- require.Equal(t, uint64(20000), amount)
- left, _ := l.asa(t, addrs[0], asaIndex)
-
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, close.Noted("a"))
- l.endBlock(t, eval)
-
- amount, _ = l.asa(t, appIndex.Address(), asaIndex)
- require.Equal(t, uint64(0), amount)
- back, _ := l.asa(t, addrs[0], asaIndex)
- require.Equal(t, uint64(20000), back-left)
-}
-
-// TestClawbackAction ensures an app address can act as a clawback address.
-func TestClawbackAction(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- asaIndex := basics.AssetIndex(1)
- appIndex := basics.AppIndex(2)
-
- asa := txntest.Txn{
- Type: "acfg",
- Sender: addrs[0],
- AssetParams: basics.AssetParams{
- Total: 1000000,
- Decimals: 3,
- UnitName: "oz",
- AssetName: "Gold",
- URL: "https://gold.rush/",
- Clawback: appIndex.Address(),
- },
- }
-
- app := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: main(`
- itxn_begin
-
- int axfer
- itxn_field TypeEnum
-
- txn Assets 0
- itxn_field XferAsset
-
- txn Accounts 1
- itxn_field AssetSender
-
- txn Accounts 2
- itxn_field AssetReceiver
-
- int 1000
- itxn_field AssetAmount
-
- itxn_submit
-`),
- }
-
- optin := txntest.Txn{
- Type: "axfer",
- Sender: addrs[1],
- AssetReceiver: addrs[1],
- XferAsset: asaIndex,
- }
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &asa, &app, &optin)
- vb := l.endBlock(t, eval)
-
- require.Equal(t, asaIndex, vb.Block().Payset[0].ApplyData.ConfigAsset)
- require.Equal(t, appIndex, vb.Block().Payset[1].ApplyData.ApplicationID)
-
- bystander := addrs[2] // Has no authority of its own
- overpay := txntest.Txn{
- Type: "pay",
- Sender: bystander,
- Receiver: bystander,
- Fee: 2000, // Overpay fee so that app account can be unfunded
- }
- clawmove := txntest.Txn{
- Type: "appl",
- Sender: bystander,
- ApplicationID: appIndex,
- ForeignAssets: []basics.AssetIndex{asaIndex},
- Accounts: []basics.Address{addrs[0], addrs[1]},
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txgroup(t, &overpay, &clawmove)
- l.endBlock(t, eval)
-
- amount, _ := l.asa(t, addrs[1], asaIndex)
- require.Equal(t, amount, uint64(1000))
-}
-
-// TestRekeyAction ensures an app can transact for a rekeyed account
-func TestRekeyAction(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- appIndex := basics.AppIndex(1)
- ezpayer := txntest.Txn{
- Type: "appl",
- Sender: addrs[5],
- ApprovalProgram: main(`
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 5000
- itxn_field Amount
- txn Accounts 1
- itxn_field Sender
- txn Accounts 2
- itxn_field Receiver
- txn NumAccounts
- int 3
- ==
- bz skipclose
- txn Accounts 3
- itxn_field CloseRemainderTo
-skipclose:
- itxn_submit
-`),
- }
-
- rekey := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: addrs[0],
- RekeyTo: appIndex.Address(),
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &ezpayer, &rekey)
- l.endBlock(t, eval)
-
- useacct := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- Accounts: []basics.Address{addrs[0], addrs[2]}, // pay 2 from 0 (which was rekeyed)
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &useacct)
- l.endBlock(t, eval)
-
- // App was never funded (didn't spend from its own acct)
- require.Equal(t, uint64(0), l.micros(t, basics.AppIndex(1).Address()))
- // addrs[2] got paid
- require.Equal(t, uint64(5000), l.micros(t, addrs[2])-l.micros(t, addrs[6]))
- // addrs[0] paid 5k + rekey fee + inner txn fee
- require.Equal(t, uint64(7000), l.micros(t, addrs[6])-l.micros(t, addrs[0]))
-
- baduse := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- Accounts: []basics.Address{addrs[2], addrs[0]}, // pay 0 from 2
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &baduse, "unauthorized")
- l.endBlock(t, eval)
-
- // Now, we close addrs[0], which wipes its rekey status. Reopen
- // it, and make sure the app can't spend.
-
- close := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- Accounts: []basics.Address{addrs[0], addrs[2], addrs[3]}, // close to 3
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &close)
- l.endBlock(t, eval)
-
- require.Equal(t, uint64(0), l.micros(t, addrs[0]))
-
- payback := txntest.Txn{
- Type: "pay",
- Sender: addrs[3],
- Receiver: addrs[0],
- Amount: 10_000_000,
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &payback)
- l.endBlock(t, eval)
-
- require.Equal(t, uint64(10_000_000), l.micros(t, addrs[0]))
-
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, useacct.Noted("2"), "unauthorized")
- l.endBlock(t, eval)
-}
-
-// TestRekeyActionCloseAccount ensures closing and reopening a rekeyed account in a single app call
-// properly removes the app as an authorizer for the account
-func TestRekeyActionCloseAccount(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- appIndex := basics.AppIndex(1)
- create := txntest.Txn{
- Type: "appl",
- Sender: addrs[5],
- ApprovalProgram: main(`
- // close account 1
- itxn_begin
- int pay
- itxn_field TypeEnum
- txn Accounts 1
- itxn_field Sender
- txn Accounts 2
- itxn_field CloseRemainderTo
- itxn_submit
-
- // reopen account 1
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 5000
- itxn_field Amount
- txn Accounts 1
- itxn_field Receiver
- itxn_submit
- // send from account 1 again (should fail because closing an account erases rekeying)
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 1
- itxn_field Amount
- txn Accounts 1
- itxn_field Sender
- txn Accounts 2
- itxn_field Receiver
- itxn_submit
-`),
- }
-
- rekey := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: addrs[0],
- RekeyTo: appIndex.Address(),
- }
-
- fund := txntest.Txn{
- Type: "pay",
- Sender: addrs[1],
- Receiver: appIndex.Address(),
- Amount: 1_000_000,
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &create, &rekey, &fund)
- l.endBlock(t, eval)
-
- useacct := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- Accounts: []basics.Address{addrs[0], addrs[2]},
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &useacct, "unauthorized")
- l.endBlock(t, eval)
-}
-
-// TestDuplicatePayAction shows that two pays with the same parameters can be done as inner transactions
-func TestDuplicatePayAction(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- appIndex := basics.AppIndex(1)
- create := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: main(`
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 5000
- itxn_field Amount
- txn Accounts 1
- itxn_field Receiver
- itxn_submit
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 5000
- itxn_field Amount
- txn Accounts 1
- itxn_field Receiver
- itxn_submit
-`),
- }
-
- fund := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: appIndex.Address(),
- Amount: 200000, // account min balance, plus fees
- }
-
- paytwice := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- Accounts: []basics.Address{addrs[1]}, // pay self
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &create, &fund, &paytwice, create.Noted("in same block"))
- vb := l.endBlock(t, eval)
-
- require.Equal(t, appIndex, vb.Block().Payset[0].ApplyData.ApplicationID)
- require.Equal(t, 4, len(vb.Block().Payset))
- // create=1, fund=2, payTwice=3,4,5
- require.Equal(t, basics.AppIndex(6), vb.Block().Payset[3].ApplyData.ApplicationID)
-
- ad0 := l.micros(t, addrs[0])
- ad1 := l.micros(t, addrs[1])
- app := l.micros(t, appIndex.Address())
-
- // create(1000) and fund(1000 + 200000), extra create (1000)
- require.Equal(t, 203000, int(genBalances.Balances[addrs[0]].MicroAlgos.Raw-ad0))
- // paid 10000, minus the 1000 fee on the txn
- require.Equal(t, 9000, int(ad1-genBalances.Balances[addrs[1]].MicroAlgos.Raw))
- // app still has 188000 (paid out 10000, and paid 2 x fee to do it)
- require.Equal(t, 188000, int(app))
-
- // Now create another app, and see if it gets the index we expect.
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, create.Noted("again"))
- vb = l.endBlock(t, eval)
-
- // create=1, fund=2, payTwice=3,4,5, insameblock=6
- require.Equal(t, basics.AppIndex(7), vb.Block().Payset[0].ApplyData.ApplicationID)
-}
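
The expected application IDs above follow from the transaction counter: with inner transactions counted, an app created by the N-th counted transaction gets ID N, which is the pattern these assertions rely on. A standalone sketch (illustration only; the closure is hypothetical) replaying the count:

package main

import "fmt"

func main() {
	counter := 0
	next := func(inners int) int {
		counter++         // the top-level transaction itself
		counter += inners // any inner transactions it submits
		return counter - inners
	}

	fmt.Println(next(0)) // create                      -> app 1
	fmt.Println(next(0)) // fund                        -> counter 2 (a pay, creates nothing)
	fmt.Println(next(2)) // payTwice                    -> counter 3, with inners 4 and 5
	fmt.Println(next(0)) // create "in same block"      -> app 6
	fmt.Println(next(0)) // create "again", next block  -> app 7
}
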
-
-// TestInnerTxnCount ensures that inner transactions increment the TxnCounter
-func TestInnerTxnCount(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- create := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: main(`
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 5000
- itxn_field Amount
- txn Accounts 1
- itxn_field Receiver
- itxn_submit
-`),
- }
-
- fund := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: basics.AppIndex(1).Address(),
- Amount: 200000, // account min balance, plus fees
- }
-
- payout1 := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: basics.AppIndex(1),
- Accounts: []basics.Address{addrs[1]}, // pay self
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &create, &fund)
- vb := l.endBlock(t, eval)
- require.Equal(t, 2, int(vb.Block().TxnCounter))
-
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &payout1)
- vb = l.endBlock(t, eval)
- require.Equal(t, 4, int(vb.Block().TxnCounter))
-}
-
-// TestAcfgAction ensures assets can be created and configured in teal
-func TestAcfgAction(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- appIndex := basics.AppIndex(1)
- app := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: main(`
- itxn_begin
- int acfg
- itxn_field TypeEnum
-
- txn ApplicationArgs 0
- byte "create"
- ==
- bz manager
- int 1000000
- itxn_field ConfigAssetTotal
- int 3
- itxn_field ConfigAssetDecimals
- byte "oz"
- itxn_field ConfigAssetUnitName
- byte "Gold"
- itxn_field ConfigAssetName
- byte "https://gold.rush/"
- itxn_field ConfigAssetURL
-
- global CurrentApplicationAddress
- dup
- dup2
- itxn_field ConfigAssetManager
- itxn_field ConfigAssetReserve
- itxn_field ConfigAssetFreeze
- itxn_field ConfigAssetClawback
- b submit
-manager:
- // Put the current values in the itxn
- txn Assets 0
- asset_params_get AssetManager
- assert // exists
- itxn_field ConfigAssetManager
-
- txn Assets 0
- asset_params_get AssetReserve
- assert // exists
- itxn_field ConfigAssetReserve
-
- txn Assets 0
- asset_params_get AssetFreeze
- assert // exists
- itxn_field ConfigAssetFreeze
-
- txn Assets 0
- asset_params_get AssetClawback
- assert // exists
- itxn_field ConfigAssetClawback
-
-
- txn ApplicationArgs 0
- byte "manager"
- ==
- bz reserve
- txn Assets 0
- itxn_field ConfigAsset
- txn ApplicationArgs 1
- itxn_field ConfigAssetManager
- b submit
-reserve:
- txn ApplicationArgs 0
- byte "reserve"
- ==
- bz freeze
- txn Assets 0
- itxn_field ConfigAsset
- txn ApplicationArgs 1
- itxn_field ConfigAssetReserve
- b submit
-freeze:
- txn ApplicationArgs 0
- byte "freeze"
- ==
- bz clawback
- txn Assets 0
- itxn_field ConfigAsset
- txn ApplicationArgs 1
- itxn_field ConfigAssetFreeze
- b submit
-clawback:
- txn ApplicationArgs 0
- byte "clawback"
- ==
- bz error
- txn Assets 0
- itxn_field ConfigAsset
- txn ApplicationArgs 1
- itxn_field ConfigAssetClawback
- b submit
-error: err
-submit: itxn_submit
-`),
- }
-
- fund := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: appIndex.Address(),
- Amount: 200_000, // exactly account min balance + one asset
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &app, &fund)
- l.endBlock(t, eval)
-
- createAsa := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- ApplicationArgs: [][]byte{[]byte("create")},
- }
-
- eval = testingEvaluator{l.nextBlock(t), l}
- // Can't create an asset if you have exactly 200,000 and need to pay the fee
- eval.txn(t, &createAsa, "balance 199000 below min 200000")
- // fund it some more and try again
- eval.txns(t, fund.Noted("more!"), &createAsa)
- vb := l.endBlock(t, eval)
-
- asaIndex := vb.Block().Payset[1].EvalDelta.InnerTxns[0].ConfigAsset
- require.Equal(t, basics.AssetIndex(5), asaIndex)
-
- asaParams, err := l.asaParams(t, basics.AssetIndex(5))
- require.NoError(t, err)
-
- require.Equal(t, 1_000_000, int(asaParams.Total))
- require.Equal(t, 3, int(asaParams.Decimals))
- require.Equal(t, "oz", asaParams.UnitName)
- require.Equal(t, "Gold", asaParams.AssetName)
- require.Equal(t, "https://gold.rush/", asaParams.URL)
-
- require.Equal(t, appIndex.Address(), asaParams.Manager)
-
- for _, a := range []string{"reserve", "freeze", "clawback", "manager"} {
- check := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- ApplicationArgs: [][]byte{[]byte(a), []byte("junkjunkjunkjunkjunkjunkjunkjunk")},
- ForeignAssets: []basics.AssetIndex{asaIndex},
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- t.Log(a)
- eval.txn(t, &check)
- l.endBlock(t, eval)
- }
- // Not the manager anymore so this won't work
- nodice := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- ApplicationArgs: [][]byte{[]byte("freeze"), []byte("junkjunkjunkjunkjunkjunkjunkjunk")},
- ForeignAssets: []basics.AssetIndex{asaIndex},
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &nodice, "this transaction should be issued by the manager")
- l.endBlock(t, eval)
-
-}
-
-// TestAsaDuringInit ensures an ASA can be made while initializing an
-// app. In practice, this is impossible, because you would not be
-// able to prefund the account - you don't know the app id. But here
-// we do know it, so the test helps exercise txncounter changes.
-func TestAsaDuringInit(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- appIndex := basics.AppIndex(2)
- prefund := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: appIndex.Address(),
- Amount: 300000, // plenty for min balances, fees
- }
-
- app := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: `
- itxn_begin
- int acfg
- itxn_field TypeEnum
- int 1000000
- itxn_field ConfigAssetTotal
- byte "oz"
- itxn_field ConfigAssetUnitName
- byte "Gold"
- itxn_field ConfigAssetName
- itxn_submit
- itxn CreatedAssetID
- int 3
- ==
- itxn CreatedApplicationID
- int 0
- ==
- &&
- itxn NumLogs
- int 0
- ==
- &&
-`,
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &prefund, &app)
- vb := l.endBlock(t, eval)
-
- require.Equal(t, appIndex, vb.Block().Payset[1].ApplicationID)
-
- asaIndex := vb.Block().Payset[1].EvalDelta.InnerTxns[0].ConfigAsset
- require.Equal(t, basics.AssetIndex(3), asaIndex)
-}
-
-func TestRekey(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- app := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: main(`
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 1
- itxn_field Amount
- global CurrentApplicationAddress
- itxn_field Receiver
- int 31
- bzero
- byte 0x01
- concat
- itxn_field RekeyTo
- itxn_submit
-`),
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &app)
- vb := l.endBlock(t, eval)
- appIndex := vb.Block().Payset[0].ApplicationID
- require.Equal(t, basics.AppIndex(1), appIndex)
-
- fund := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: appIndex.Address(),
- Amount: 1_000_000,
- }
- rekey := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &fund, &rekey)
- eval.txn(t, rekey.Noted("2"), "unauthorized")
- l.endBlock(t, eval)
-
-}
-
-func TestNote(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- app := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: main(`
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 0
- itxn_field Amount
- global CurrentApplicationAddress
- itxn_field Receiver
- byte "abcdefghijklmnopqrstuvwxyz01234567890"
- itxn_field Note
- itxn_submit
-`),
- }
-
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &app)
- vb := l.endBlock(t, eval)
- appIndex := vb.Block().Payset[0].ApplicationID
- require.Equal(t, basics.AppIndex(1), appIndex)
-
- fund := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: appIndex.Address(),
- Amount: 1_000_000,
- }
- note := txntest.Txn{
- Type: "appl",
- Sender: addrs[1],
- ApplicationID: appIndex,
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &fund, &note)
- vb = l.endBlock(t, eval)
- alphabet := vb.Block().Payset[1].EvalDelta.InnerTxns[0].Txn.Note
- require.Equal(t, "abcdefghijklmnopqrstuvwxyz01234567890", string(alphabet))
-}
-
-func TestKeyreg(t *testing.T) {
- partitiontest.PartitionTest(t)
-
- genBalances, addrs, _ := ledgertesting.NewTestGenesis()
- l := newTestLedger(t, genBalances)
- defer l.Close()
-
- app := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApprovalProgram: main(`
- txn ApplicationArgs 0
- byte "pay"
- ==
- bz nonpart
- itxn_begin
- int pay
- itxn_field TypeEnum
- int 1
- itxn_field Amount
- txn Sender
- itxn_field Receiver
- itxn_submit
- int 1
- return
-nonpart:
- itxn_begin
- int keyreg
- itxn_field TypeEnum
- int 1
- itxn_field Nonparticipation
- itxn_submit
-`),
- }
-
- // Create the app
- eval := testingEvaluator{l.nextBlock(t), l}
- eval.txns(t, &app)
- vb := l.endBlock(t, eval)
- appIndex := vb.Block().Payset[0].ApplicationID
- require.Equal(t, basics.AppIndex(1), appIndex)
-
- // Give the app a lot of money
- fund := txntest.Txn{
- Type: "pay",
- Sender: addrs[0],
- Receiver: appIndex.Address(),
- Amount: 1_000_000_000,
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &fund)
- l.endBlock(t, eval)
-
- require.Equal(t, 1_000_000_000, int(l.micros(t, appIndex.Address())))
-
- // Build up Residue in RewardsState so it's ready to pay
- for i := 1; i < 10; i++ {
- eval := testingEvaluator{l.nextBlock(t), l}
- l.endBlock(t, eval)
- }
-
- // pay a little
- pay := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApplicationID: appIndex,
- ApplicationArgs: [][]byte{[]byte("pay")},
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &pay)
- l.endBlock(t, eval)
- // 2000 was earned in rewards (- 1000 fee, -1 pay)
- require.Equal(t, 1_000_000_999, int(l.micros(t, appIndex.Address())))
-
- // Go nonpart
- nonpart := txntest.Txn{
- Type: "appl",
- Sender: addrs[0],
- ApplicationID: appIndex,
- ApplicationArgs: [][]byte{[]byte("nonpart")},
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, &nonpart)
- l.endBlock(t, eval)
- require.Equal(t, 999_999_999, int(l.micros(t, appIndex.Address())))
-
- // Build up Residue in RewardsState so it's ready to pay AGAIN
- // But expect no rewards
- for i := 1; i < 100; i++ {
- eval := testingEvaluator{l.nextBlock(t), l}
- l.endBlock(t, eval)
- }
- eval = testingEvaluator{l.nextBlock(t), l}
- eval.txn(t, pay.Noted("again"))
- eval.txn(t, nonpart.Noted("again"), "cannot change online/offline")
- l.endBlock(t, eval)
- // Paid fee and 1. Did not get rewards
- require.Equal(t, 999_998_998, int(l.micros(t, appIndex.Address())))
-}
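
The closing balance checks in the deleted TestKeyreg are plain bookkeeping on a flat 1000 microAlgo fee, a 1 microAlgo inner pay, and one 2000 microAlgo rewards credit earned before the keyreg goes nonparticipating. A standalone sketch (illustration only) of that arithmetic:

package main

import "fmt"

func main() {
	balance := 1_000_000_000 // initial funding
	balance += 2000          // rewards earned while still participating
	balance -= 1000 + 1      // "pay" app call: fee plus 1 microAlgo inner pay
	fmt.Println(balance)     // 1_000_000_999

	balance -= 1000          // "nonpart" app call: fee only
	fmt.Println(balance)     // 999_999_999

	balance -= 1000 + 1      // "pay" again: fee plus inner pay, but no rewards now
	fmt.Println(balance)     // 999_998_998
}
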
diff --git a/ledger/evalindexer.go b/ledger/evalindexer.go
index 7cdd36d5f..78030ede7 100644
--- a/ledger/evalindexer.go
+++ b/ledger/evalindexer.go
@@ -67,6 +67,7 @@ type Creatable struct {
type indexerLedgerConnector struct {
il indexerLedgerForEval
genesisHash crypto.Digest
+ genesisProto config.ConsensusParams
latestRound basics.Round
roundResources EvalForIndexerResources
}
@@ -147,6 +148,11 @@ func (l indexerLedgerConnector) GenesisHash() crypto.Digest {
return l.genesisHash
}
+// GenesisProto is part of LedgerForEvaluator interface.
+func (l indexerLedgerConnector) GenesisProto() config.ConsensusParams {
+ return l.genesisProto
+}
+
// Totals is part of LedgerForEvaluator interface.
func (l indexerLedgerConnector) LatestTotals() (rnd basics.Round, totals ledgercore.AccountTotals, err error) {
totals, err = l.il.LatestTotals()
@@ -160,10 +166,11 @@ func (l indexerLedgerConnector) CompactCertVoters(_ basics.Round) (*ledgercore.V
return nil, errors.New("CompactCertVoters() not implemented")
}
-func makeIndexerLedgerConnector(il indexerLedgerForEval, genesisHash crypto.Digest, latestRound basics.Round, roundResources EvalForIndexerResources) indexerLedgerConnector {
+func makeIndexerLedgerConnector(il indexerLedgerForEval, genesisHash crypto.Digest, genesisProto config.ConsensusParams, latestRound basics.Round, roundResources EvalForIndexerResources) indexerLedgerConnector {
return indexerLedgerConnector{
il: il,
genesisHash: genesisHash,
+ genesisProto: genesisProto,
latestRound: latestRound,
roundResources: roundResources,
}
@@ -175,7 +182,7 @@ func makeIndexerLedgerConnector(il indexerLedgerForEval, genesisHash crypto.Dige
// close amount for each transaction even when the real consensus parameters do not
// support it.
func EvalForIndexer(il indexerLedgerForEval, block *bookkeeping.Block, proto config.ConsensusParams, resources EvalForIndexerResources) (ledgercore.StateDelta, []transactions.SignedTxnInBlock, error) {
- ilc := makeIndexerLedgerConnector(il, block.GenesisHash(), block.Round()-1, resources)
+ ilc := makeIndexerLedgerConnector(il, block.GenesisHash(), proto, block.Round()-1, resources)
eval, err := internal.StartEvaluator(
ilc, block.BlockHeader,
diff --git a/ledger/evalindexer_test.go b/ledger/evalindexer_test.go
index 105b49f99..6d944917d 100644
--- a/ledger/evalindexer_test.go
+++ b/ledger/evalindexer_test.go
@@ -251,6 +251,25 @@ func TestEvalForIndexerForExpiredAccounts(t *testing.T) {
require.NoError(t, err)
}
+func newTestLedger(t testing.TB, balances bookkeeping.GenesisBalances) *Ledger {
+ var genHash crypto.Digest
+ crypto.RandBytes(genHash[:])
+ genBlock, err := bookkeeping.MakeGenesisBlock(protocol.ConsensusFuture, balances, "test", genHash)
+ require.NoError(t, err)
+ require.False(t, genBlock.FeeSink.IsZero())
+ require.False(t, genBlock.RewardsPool.IsZero())
+ dbName := fmt.Sprintf("%s.%d", t.Name(), crypto.RandUint64())
+ cfg := config.GetDefaultLocal()
+ cfg.Archival = true
+ l, err := OpenLedger(logging.Base(), dbName, true, ledgercore.InitState{
+ Block: genBlock,
+ Accounts: balances.Balances,
+ GenesisHash: genHash,
+ }, cfg)
+ require.NoError(t, err)
+ return l
+}
+
// Test that preloading data in cow base works as expected.
func TestResourceCaching(t *testing.T) {
partitiontest.PartitionTest(t)
@@ -285,7 +304,8 @@ func TestResourceCaching(t *testing.T) {
},
}
- ilc := makeIndexerLedgerConnector(indexerLedgerForEvalImpl{l: l, latestRound: basics.Round(0)}, block.GenesisHash(), block.Round()-1, resources)
+ proto := config.Consensus[protocol.ConsensusFuture]
+ ilc := makeIndexerLedgerConnector(indexerLedgerForEvalImpl{l: l, latestRound: basics.Round(0)}, block.GenesisHash(), proto, block.Round()-1, resources)
{
accountData, rnd, err := ilc.LookupWithoutRewards(basics.Round(0), address)
diff --git a/ledger/internal/appcow.go b/ledger/internal/appcow.go
index 9d515c473..85511a713 100644
--- a/ledger/internal/appcow.go
+++ b/ledger/internal/appcow.go
@@ -256,8 +256,6 @@ func (cb *roundCowState) AllocateApp(addr basics.Address, aidx basics.AppIndex,
cb.mods.ModifiedAppLocalStates[aa] = true
}
- cb.trackCreatable(basics.CreatableIndex(aidx))
-
return nil
}
@@ -471,19 +469,16 @@ func MakeDebugBalances(l LedgerForCowBase, round basics.Round, proto protocol.Co
return cb
}
-// StatefulEval runs application.
-// Execution happens in a child cow and all modifications are merged into parent if the program passes
-func (cb *roundCowState) StatefulEval(params logic.EvalParams, aidx basics.AppIndex, program []byte) (pass bool, evalDelta transactions.EvalDelta, err error) {
+// StatefulEval runs the application. Execution happens in a child cow, and all
+// modifications are merged into the parent and the ApplyData in params.TxnGroup[gi] is
+// filled if the program passes.
+func (cb *roundCowState) StatefulEval(gi int, params *logic.EvalParams, aidx basics.AppIndex, program []byte) (pass bool, evalDelta transactions.EvalDelta, err error) {
// Make a child cow to eval our program in
calf := cb.child(1)
- params.Ledger, err = newLogicLedger(calf, aidx)
- if err != nil {
- return false, transactions.EvalDelta{}, err
- }
+ params.Ledger = newLogicLedger(calf)
// Eval the program
- var cx *logic.EvalContext
- pass, cx, err = logic.EvalStatefulCx(program, params)
+ pass, cx, err := logic.EvalContract(program, gi, aidx, params)
if err != nil {
var details string
if cx != nil {
@@ -495,13 +490,23 @@ func (cb *roundCowState) StatefulEval(params logic.EvalParams, aidx basics.AppIn
// If program passed, build our eval delta, and commit to state changes
if pass {
- evalDelta, err = calf.BuildEvalDelta(aidx, &params.Txn.Txn)
- if err != nil {
- return false, transactions.EvalDelta{}, err
+ // Before contract-to-contract calls, use BuildEvalDelta because it has
+ // hairy code to maintain compatibility with some buggy old versions
+ // that created EvalDeltas differently. After introducing c2c, it is
+ // "too late" to build the EvalDelta here, since the ledger already
+ // includes changes from this app and any inner apps it called. Instead,
+ // the EvalDelta is now built as evaluation proceeds, so just use it.
+ if cb.proto.LogicSigVersion < 6 {
+ evalDelta, err = calf.BuildEvalDelta(aidx, &params.TxnGroup[gi].Txn)
+ if err != nil {
+ return false, transactions.EvalDelta{}, err
+ }
+ evalDelta.Logs = params.TxnGroup[gi].EvalDelta.Logs
+ evalDelta.InnerTxns = params.TxnGroup[gi].EvalDelta.InnerTxns
+ } else {
+ evalDelta = params.TxnGroup[gi].EvalDelta
}
calf.commitToParent()
- evalDelta.Logs = cx.Logs
- evalDelta.InnerTxns = cx.InnerTxns
}
return pass, evalDelta, nil
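
The comment in the StatefulEval hunk above describes a version gate: before program version 6 the EvalDelta is rebuilt from the ledger for compatibility, while from version 6 on the delta accumulated on the transaction during evaluation is used as-is. A simplified standalone sketch of that selection (hypothetical types, not the real ledger code):

package main

import "fmt"

type evalDelta struct {
	Logs      []string
	InnerTxns int
}

type txnWithAD struct {
	EvalDelta evalDelta
}

// finalDelta picks the delta the way the gate above does: old programs keep
// the ledger-rebuilt delta (with logs and inner txns grafted on), newer ones
// use the delta accumulated during evaluation directly.
func finalDelta(logicSigVersion int, rebuilt evalDelta, txn txnWithAD) evalDelta {
	if logicSigVersion < 6 {
		rebuilt.Logs = txn.EvalDelta.Logs
		rebuilt.InnerTxns = txn.EvalDelta.InnerTxns
		return rebuilt
	}
	return txn.EvalDelta
}

func main() {
	txn := txnWithAD{EvalDelta: evalDelta{Logs: []string{"a"}, InnerTxns: 1}}
	fmt.Println(finalDelta(5, evalDelta{}, txn)) // rebuilt path, logs/inners copied over
	fmt.Println(finalDelta(6, evalDelta{}, txn)) // accumulated delta used directly
}
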
diff --git a/ledger/internal/applications.go b/ledger/internal/applications.go
index 23a178d44..5fef80a21 100644
--- a/ledger/internal/applications.go
+++ b/ledger/internal/applications.go
@@ -22,19 +22,17 @@ import (
"github.com/algorand/go-algorand/config"
"github.com/algorand/go-algorand/data/basics"
"github.com/algorand/go-algorand/data/transactions"
+ "github.com/algorand/go-algorand/data/transactions/logic"
"github.com/algorand/go-algorand/ledger/apply"
"github.com/algorand/go-algorand/protocol"
)
type logicLedger struct {
- aidx basics.AppIndex
- creator basics.Address
- cow cowForLogicLedger
+ cow cowForLogicLedger
}
type cowForLogicLedger interface {
Get(addr basics.Address, withPendingRewards bool) (basics.AccountData, error)
- GetCreatableID(groupIdx int) basics.CreatableIndex
GetCreator(cidx basics.CreatableIndex, ctype basics.CreatableType) (basics.Address, bool, error)
GetKey(addr basics.Address, aidx basics.AppIndex, global bool, key string, accountIdx uint64) (basics.TealValue, bool, error)
BuildEvalDelta(aidx basics.AppIndex, txn *transactions.Transaction) (transactions.EvalDelta, error)
@@ -49,25 +47,10 @@ type cowForLogicLedger interface {
incTxnCount()
}
-func newLogicLedger(cow cowForLogicLedger, aidx basics.AppIndex) (*logicLedger, error) {
- if aidx == basics.AppIndex(0) {
- return nil, fmt.Errorf("cannot make logic ledger for app index 0")
+func newLogicLedger(cow cowForLogicLedger) *logicLedger {
+ return &logicLedger{
+ cow: cow,
}
-
- al := &logicLedger{
- aidx: aidx,
- cow: cow,
- }
-
- // Fetch app creator so we don't have to look it up every time we get/set/del
- // a key for this app's global state
- creator, err := al.fetchAppCreator(al.aidx)
- if err != nil {
- return nil, err
- }
- al.creator = creator
-
- return al, nil
}
func (al *logicLedger) Balance(addr basics.Address) (res basics.MicroAlgos, err error) {
@@ -100,10 +83,6 @@ func (al *logicLedger) Authorizer(addr basics.Address) (basics.Address, error) {
return addr, nil
}
-func (al *logicLedger) GetCreatableID(groupIdx int) basics.CreatableIndex {
- return al.cow.GetCreatableID(groupIdx)
-}
-
func (al *logicLedger) AssetHolding(addr basics.Address, assetIdx basics.AssetIndex) (basics.AssetHolding, error) {
// Fetch the requested balance record
record, err := al.cow.Get(addr, false)
@@ -185,34 +164,20 @@ func (al *logicLedger) LatestTimestamp() int64 {
return al.cow.prevTimestamp()
}
-func (al *logicLedger) ApplicationID() basics.AppIndex {
- return al.aidx
-}
-
-func (al *logicLedger) CreatorAddress() basics.Address {
- return al.creator
-}
-
func (al *logicLedger) OptedIn(addr basics.Address, appIdx basics.AppIndex) (bool, error) {
- if appIdx == basics.AppIndex(0) {
- appIdx = al.aidx
- }
return al.cow.allocated(addr, appIdx, false)
}
func (al *logicLedger) GetLocal(addr basics.Address, appIdx basics.AppIndex, key string, accountIdx uint64) (basics.TealValue, bool, error) {
- if appIdx == basics.AppIndex(0) {
- appIdx = al.aidx
- }
return al.cow.GetKey(addr, appIdx, false, key, accountIdx)
}
-func (al *logicLedger) SetLocal(addr basics.Address, key string, value basics.TealValue, accountIdx uint64) error {
- return al.cow.SetKey(addr, al.aidx, false, key, value, accountIdx)
+func (al *logicLedger) SetLocal(addr basics.Address, appIdx basics.AppIndex, key string, value basics.TealValue, accountIdx uint64) error {
+ return al.cow.SetKey(addr, appIdx, false, key, value, accountIdx)
}
-func (al *logicLedger) DelLocal(addr basics.Address, key string, accountIdx uint64) error {
- return al.cow.DelKey(addr, al.aidx, false, key, accountIdx)
+func (al *logicLedger) DelLocal(addr basics.Address, appIdx basics.AppIndex, key string, accountIdx uint64) error {
+ return al.cow.DelKey(addr, appIdx, false, key, accountIdx)
}
func (al *logicLedger) fetchAppCreator(appIdx basics.AppIndex) (basics.Address, error) {
@@ -229,9 +194,6 @@ func (al *logicLedger) fetchAppCreator(appIdx basics.AppIndex) (basics.Address,
}
func (al *logicLedger) GetGlobal(appIdx basics.AppIndex, key string) (basics.TealValue, bool, error) {
- if appIdx == basics.AppIndex(0) {
- appIdx = al.aidx
- }
addr, err := al.fetchAppCreator(appIdx)
if err != nil {
return basics.TealValue{}, false, err
@@ -239,16 +201,20 @@ func (al *logicLedger) GetGlobal(appIdx basics.AppIndex, key string) (basics.Tea
return al.cow.GetKey(addr, appIdx, true, key, 0)
}
-func (al *logicLedger) SetGlobal(key string, value basics.TealValue) error {
- return al.cow.SetKey(al.creator, al.aidx, true, key, value, 0)
-}
-
-func (al *logicLedger) DelGlobal(key string) error {
- return al.cow.DelKey(al.creator, al.aidx, true, key, 0)
+func (al *logicLedger) SetGlobal(appIdx basics.AppIndex, key string, value basics.TealValue) error {
+ creator, err := al.fetchAppCreator(appIdx)
+ if err != nil {
+ return err
+ }
+ return al.cow.SetKey(creator, appIdx, true, key, value, 0)
}
-func (al *logicLedger) GetDelta(txn *transactions.Transaction) (evalDelta transactions.EvalDelta, err error) {
- return al.cow.BuildEvalDelta(al.aidx, txn)
+func (al *logicLedger) DelGlobal(appIdx basics.AppIndex, key string) error {
+ creator, err := al.fetchAppCreator(appIdx)
+ if err != nil {
+ return err
+ }
+ return al.cow.DelKey(creator, appIdx, true, key, 0)
}
func (al *logicLedger) balances() (apply.Balances, error) {
@@ -259,23 +225,22 @@ func (al *logicLedger) balances() (apply.Balances, error) {
return balances, nil
}
-func (al *logicLedger) Perform(tx *transactions.Transaction, spec transactions.SpecialAddresses) (transactions.ApplyData, error) {
- var ad transactions.ApplyData
-
+func (al *logicLedger) Perform(gi int, ep *logic.EvalParams) error {
+ txn := &ep.TxnGroup[gi]
balances, err := al.balances()
if err != nil {
- return ad, err
+ return err
}
// move fee to pool
- err = balances.Move(tx.Sender, spec.FeeSink, tx.Fee, &ad.SenderRewards, nil)
+ err = balances.Move(txn.Txn.Sender, ep.Specials.FeeSink, txn.Txn.Fee, &txn.ApplyData.SenderRewards, nil)
if err != nil {
- return ad, err
+ return err
}
- err = apply.Rekey(balances, tx)
+ err = apply.Rekey(balances, &txn.Txn)
if err != nil {
- return ad, err
+ return err
}
// compared to eval.transaction() it may seem strange that we
@@ -289,26 +254,33 @@ func (al *logicLedger) Perform(tx *transactions.Transaction, spec transactions.S
// first glance.
al.cow.incTxnCount()
- switch tx.Type {
+ switch txn.Txn.Type {
case protocol.PaymentTx:
- err = apply.Payment(tx.PaymentTxnFields, tx.Header, balances, spec, &ad)
+ err = apply.Payment(txn.Txn.PaymentTxnFields, txn.Txn.Header, balances, *ep.Specials, &txn.ApplyData)
case protocol.KeyRegistrationTx:
- err = apply.Keyreg(tx.KeyregTxnFields, tx.Header, balances, spec, &ad, al.Round())
+ err = apply.Keyreg(txn.Txn.KeyregTxnFields, txn.Txn.Header, balances, *ep.Specials, &txn.ApplyData,
+ al.Round())
case protocol.AssetConfigTx:
- err = apply.AssetConfig(tx.AssetConfigTxnFields, tx.Header, balances, spec, &ad, al.cow.txnCounter())
+ err = apply.AssetConfig(txn.Txn.AssetConfigTxnFields, txn.Txn.Header, balances, *ep.Specials, &txn.ApplyData,
+ al.cow.txnCounter())
case protocol.AssetTransferTx:
- err = apply.AssetTransfer(tx.AssetTransferTxnFields, tx.Header, balances, spec, &ad)
+ err = apply.AssetTransfer(txn.Txn.AssetTransferTxnFields, txn.Txn.Header, balances, *ep.Specials, &txn.ApplyData)
+
case protocol.AssetFreezeTx:
- err = apply.AssetFreeze(tx.AssetFreezeTxnFields, tx.Header, balances, spec, &ad)
+ err = apply.AssetFreeze(txn.Txn.AssetFreezeTxnFields, txn.Txn.Header, balances, *ep.Specials, &txn.ApplyData)
+
+ case protocol.ApplicationCallTx:
+ err = apply.ApplicationCall(txn.Txn.ApplicationCallTxnFields, txn.Txn.Header, balances, &txn.ApplyData,
+ gi, ep, al.cow.txnCounter())
default:
- err = fmt.Errorf("%s tx in AVM", tx.Type)
+ err = fmt.Errorf("%s tx in AVM", txn.Txn.Type)
}
if err != nil {
- return ad, err
+ return err
}
// We don't check min balances during in app txns.
@@ -317,6 +289,10 @@ func (al *logicLedger) Perform(tx *transactions.Transaction, spec transactions.S
// it when the top-level txn concludes, because cow will return
// all changed accounts in modifiedAccounts().
- return ad, nil
+ return nil
+
+}
+func (al *logicLedger) Counter() uint64 {
+ return al.cow.txnCounter()
}
diff --git a/ledger/internal/applications_test.go b/ledger/internal/applications_test.go
index 3ba517f74..c53989757 100644
--- a/ledger/internal/applications_test.go
+++ b/ledger/internal/applications_test.go
@@ -43,7 +43,6 @@ type mockCowForLogicLedger struct {
cr map[creatableLocator]basics.Address
brs map[basics.Address]basics.AccountData
stores map[storeLocator]basics.TealKeyValue
- tcs map[int]basics.CreatableIndex
txc uint64
}
@@ -55,10 +54,6 @@ func (c *mockCowForLogicLedger) Get(addr basics.Address, withPendingRewards bool
return br, nil
}
-func (c *mockCowForLogicLedger) GetCreatableID(groupIdx int) basics.CreatableIndex {
- return c.tcs[groupIdx]
-}
-
func (c *mockCowForLogicLedger) GetCreator(cidx basics.CreatableIndex, ctype basics.CreatableType) (basics.Address, bool, error) {
addr, found := c.cr[creatableLocator{cidx, ctype}]
return addr, found, nil
@@ -132,27 +127,9 @@ func TestLogicLedgerMake(t *testing.T) {
a := require.New(t)
- _, err := newLogicLedger(nil, 0)
- a.Error(err)
- a.Contains(err.Error(), "cannot make logic ledger for app index 0")
-
- addr := ledgertesting.RandomAddress()
- aidx := basics.AppIndex(1)
-
c := &mockCowForLogicLedger{}
- _, err = newLogicLedger(c, 0)
- a.Error(err)
- a.Contains(err.Error(), "cannot make logic ledger for app index 0")
-
- _, err = newLogicLedger(c, aidx)
- a.Error(err)
- a.Contains(err.Error(), fmt.Sprintf("app %d does not exist", aidx))
-
- c = newCowMock([]modsData{{addr, basics.CreatableIndex(aidx), basics.AppCreatable}})
- l, err := newLogicLedger(c, aidx)
- a.NoError(err)
+ l := newLogicLedger(c)
a.NotNil(l)
- a.Equal(aidx, l.aidx)
a.Equal(c, l.cow)
}
@@ -161,11 +138,8 @@ func TestLogicLedgerBalances(t *testing.T) {
a := require.New(t)
- addr := ledgertesting.RandomAddress()
- aidx := basics.AppIndex(1)
- c := newCowMock([]modsData{{addr, basics.CreatableIndex(aidx), basics.AppCreatable}})
- l, err := newLogicLedger(c, aidx)
- a.NoError(err)
+ c := newCowMock(nil)
+ l := newLogicLedger(c)
a.NotNil(l)
addr1 := ledgertesting.RandomAddress()
@@ -184,8 +158,7 @@ func TestLogicLedgerGetters(t *testing.T) {
addr := ledgertesting.RandomAddress()
aidx := basics.AppIndex(1)
c := newCowMock([]modsData{{addr, basics.CreatableIndex(aidx), basics.AppCreatable}})
- l, err := newLogicLedger(c, aidx)
- a.NoError(err)
+ l := newLogicLedger(c)
a.NotNil(l)
round := basics.Round(1234)
@@ -195,12 +168,9 @@ func TestLogicLedgerGetters(t *testing.T) {
addr1 := ledgertesting.RandomAddress()
c.stores = map[storeLocator]basics.TealKeyValue{{addr1, aidx, false}: {}}
- a.Equal(aidx, l.ApplicationID())
a.Equal(round, l.Round())
a.Equal(ts, l.LatestTimestamp())
- a.True(l.OptedIn(addr1, 0))
a.True(l.OptedIn(addr1, aidx))
- a.False(l.OptedIn(addr, 0))
a.False(l.OptedIn(addr, aidx))
}
@@ -217,11 +187,10 @@ func TestLogicLedgerAsset(t *testing.T) {
{addr, basics.CreatableIndex(aidx), basics.AppCreatable},
{addr1, basics.CreatableIndex(assetIdx), basics.AssetCreatable},
})
- l, err := newLogicLedger(c, aidx)
- a.NoError(err)
+ l := newLogicLedger(c)
a.NotNil(l)
- _, _, err = l.AssetParams(basics.AssetIndex(aidx))
+ _, _, err := l.AssetParams(basics.AssetIndex(aidx))
a.Error(err)
a.Contains(err.Error(), fmt.Sprintf("asset %d does not exist", aidx))
@@ -263,8 +232,7 @@ func TestLogicLedgerGetKey(t *testing.T) {
{addr, basics.CreatableIndex(aidx), basics.AppCreatable},
{addr1, basics.CreatableIndex(assetIdx), basics.AssetCreatable},
})
- l, err := newLogicLedger(c, aidx)
- a.NoError(err)
+ l := newLogicLedger(c)
a.NotNil(l)
_, ok, err := l.GetGlobal(basics.AppIndex(assetIdx), "gkey")
@@ -303,23 +271,22 @@ func TestLogicLedgerSetKey(t *testing.T) {
c := newCowMock([]modsData{
{addr, basics.CreatableIndex(aidx), basics.AppCreatable},
})
- l, err := newLogicLedger(c, aidx)
- a.NoError(err)
+ l := newLogicLedger(c)
a.NotNil(l)
tv := basics.TealValue{Type: basics.TealUintType, Uint: 1}
- err = l.SetGlobal("gkey", tv)
+ err := l.SetGlobal(aidx, "gkey", tv)
a.Error(err)
a.Contains(err.Error(), fmt.Sprintf("no store for (%s %d %v) in mock cow", addr, aidx, true))
tv2 := basics.TealValue{Type: basics.TealUintType, Uint: 2}
c.stores = map[storeLocator]basics.TealKeyValue{{addr, aidx, true}: {"gkey": tv}}
- err = l.SetGlobal("gkey", tv2)
+ err = l.SetGlobal(aidx, "gkey", tv2)
a.NoError(err)
// check local
c.stores = map[storeLocator]basics.TealKeyValue{{addr, aidx, false}: {"lkey": tv}}
- err = l.SetLocal(addr, "lkey", tv2, 0)
+ err = l.SetLocal(addr, aidx, "lkey", tv2, 0)
a.NoError(err)
}
@@ -333,21 +300,20 @@ func TestLogicLedgerDelKey(t *testing.T) {
c := newCowMock([]modsData{
{addr, basics.CreatableIndex(aidx), basics.AppCreatable},
})
- l, err := newLogicLedger(c, aidx)
- a.NoError(err)
+ l := newLogicLedger(c)
a.NotNil(l)
- err = l.DelGlobal("gkey")
+ err := l.DelGlobal(aidx, "gkey")
a.Error(err)
a.Contains(err.Error(), fmt.Sprintf("no store for (%s %d %v) in mock cow", addr, aidx, true))
tv := basics.TealValue{Type: basics.TealUintType, Uint: 1}
c.stores = map[storeLocator]basics.TealKeyValue{{addr, aidx, true}: {"gkey": tv}}
- err = l.DelGlobal("gkey")
+ err = l.DelGlobal(aidx, "gkey")
a.NoError(err)
addr1 := ledgertesting.RandomAddress()
c.stores = map[storeLocator]basics.TealKeyValue{{addr1, aidx, false}: {"lkey": tv}}
- err = l.DelLocal(addr1, "lkey", 0)
+ err = l.DelLocal(addr1, aidx, "lkey", 0)
a.NoError(err)
}
diff --git a/ledger/internal/apptxn_test.go b/ledger/internal/apptxn_test.go
new file mode 100644
index 000000000..fed5a4f06
--- /dev/null
+++ b/ledger/internal/apptxn_test.go
@@ -0,0 +1,2441 @@
+// Copyright (C) 2019-2022 Algorand, Inc.
+// This file is part of go-algorand
+//
+// go-algorand is free software: you can redistribute it and/or modify
+// it under the terms of the GNU Affero General Public License as
+// published by the Free Software Foundation, either version 3 of the
+// License, or (at your option) any later version.
+//
+// go-algorand is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU Affero General Public License for more details.
+//
+// You should have received a copy of the GNU Affero General Public License
+// along with go-algorand. If not, see <https://www.gnu.org/licenses/>.
+
+package internal_test
+
+import (
+ "encoding/hex"
+ "fmt"
+ "strings"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+
+ "github.com/algorand/go-algorand/config"
+ "github.com/algorand/go-algorand/crypto"
+ "github.com/algorand/go-algorand/data/basics"
+ "github.com/algorand/go-algorand/data/bookkeeping"
+ "github.com/algorand/go-algorand/data/transactions/logic"
+ "github.com/algorand/go-algorand/data/txntest"
+ "github.com/algorand/go-algorand/ledger"
+ "github.com/algorand/go-algorand/ledger/ledgercore"
+ ledgertesting "github.com/algorand/go-algorand/ledger/testing"
+ "github.com/algorand/go-algorand/logging"
+ "github.com/algorand/go-algorand/protocol"
+ "github.com/algorand/go-algorand/test/partitiontest"
+)
+
+// main wraps up some TEAL source in a header and footer so that it is
+// an app that does nothing at create time, but otherwise runs source,
+// then approves, if the source avoids panicking and leaves the stack
+// empty.
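+// Semicolons in source are replaced with newlines, so callers can write TEAL snippets on a single line.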
+func main(source string) string {
+ return strings.Replace(fmt.Sprintf(`txn ApplicationID
+ bz end
+ %s
+ end: int 1`, source), ";", "\n", -1)
+}
+
+// TestPayAction ensures a pay in teal affects balances
+func TestPayAction(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genesisInitState, addrs, _ := ledgertesting.Genesis(10)
+
+ l, err := ledger.OpenLedger(logging.TestingLog(t), "", true, genesisInitState, config.GetDefaultLocal())
+ require.NoError(t, err)
+ defer l.Close()
+
+ create := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 5000
+ itxn_field Amount
+ txn Accounts 1
+ itxn_field Receiver
+ itxn_submit
+`),
+ }
+
+ ai := basics.AppIndex(1)
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: ai.Address(),
+ Amount: 200000, // account min balance, plus fees
+ }
+
+ payout1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: ai,
+ Accounts: []basics.Address{addrs[1]}, // pay self
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &create, &fund, &payout1)
+ vb := endBlock(t, l, eval)
+
+ // AD contains expected appIndex
+ require.Equal(t, ai, vb.Block().Payset[0].ApplyData.ApplicationID)
+
+ ad0 := micros(t, l, addrs[0])
+ ad1 := micros(t, l, addrs[1])
+ app := micros(t, l, ai.Address())
+
+ genAccounts := genesisInitState.Accounts
+ // create(1000) and fund(1000 + 200000)
+ require.Equal(t, uint64(202000), genAccounts[addrs[0]].MicroAlgos.Raw-ad0)
+ // paid 5000, but 1000 fee
+ require.Equal(t, uint64(4000), ad1-genAccounts[addrs[1]].MicroAlgos.Raw)
+ // app still has 194000 (paid out 5000, and paid fee to do it)
+ require.Equal(t, uint64(194000), app)
+
+ // Build up Residue in RewardsState so it's ready to pay
+ for i := 1; i < 10; i++ {
+ eval = nextBlock(t, l, true, nil)
+ endBlock(t, l, eval)
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ payout2 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: ai,
+ Accounts: []basics.Address{addrs[2]}, // pay other
+ }
+ txn(t, l, eval, &payout2)
+ // confirm that modifiedAccounts can see account in inner txn
+ vb = endBlock(t, l, eval)
+
+ deltas := vb.Delta()
+ require.Contains(t, deltas.Accts.ModifiedAccounts(), addrs[2])
+
+ payInBlock := vb.Block().Payset[0]
+ rewards := payInBlock.ApplyData.SenderRewards.Raw
+ require.Greater(t, rewards, uint64(2000)) // some biggish number
+ inners := payInBlock.ApplyData.EvalDelta.InnerTxns
+ require.Len(t, inners, 1)
+
+ // addrs[2] is going to get the same rewards as addrs[1], who
+ // originally sent the top-level txn. Both had their algo balances
+ // touched and have very nearly the same balance.
+ require.Equal(t, rewards, inners[0].ReceiverRewards.Raw)
+ // app gets none, because it has less than 1A
+ require.Equal(t, uint64(0), inners[0].SenderRewards.Raw)
+
+ ad1 = micros(t, l, addrs[1])
+ ad2 := micros(t, l, addrs[2])
+ app = micros(t, l, ai.Address())
+
+ // paid 5000, in first payout (only), but paid 1000 fee in each payout txn
+ require.Equal(t, rewards+3000, ad1-genAccounts[addrs[1]].MicroAlgos.Raw)
+ // app still has 188000 (paid out 10000, and paid 2k fees to do it)
+ // no rewards because it owns less than an algo
+ require.Equal(t, uint64(200000)-10000-2000, app)
+
+ // paid 5000 by payout2, never paid any fees, got same rewards
+ require.Equal(t, rewards+uint64(5000), ad2-genAccounts[addrs[2]].MicroAlgos.Raw)
+
+ // Now fund the app account much more, so we can confirm it gets rewards.
+ tenkalgos := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: ai.Address(),
+ Amount: 10 * 1000 * 1000000, // 10,000 Algos, so the app can start earning rewards
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &tenkalgos)
+ endBlock(t, l, eval)
+ beforepay := micros(t, l, ai.Address())
+
+ // Build up Residue in RewardsState so it's ready to pay again
+ for i := 1; i < 10; i++ {
+ eval = nextBlock(t, l, true, nil)
+ endBlock(t, l, eval)
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, payout2.Noted("2"))
+ vb = endBlock(t, l, eval)
+
+ afterpay := micros(t, l, ai.Address())
+
+ payInBlock = vb.Block().Payset[0]
+ inners = payInBlock.ApplyData.EvalDelta.InnerTxns
+ require.Len(t, inners, 1)
+
+ appreward := inners[0].SenderRewards.Raw
+ require.Greater(t, appreward, uint64(1000))
+
+ require.Equal(t, beforepay+appreward-5000-1000, afterpay)
+}
+
+// TestAxferAction ensures axfers in teal have the intended effects
+func TestAxferAction(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genesisInitState, addrs, _ := ledgertesting.Genesis(10)
+
+ l, err := ledger.OpenLedger(logging.TestingLog(t), "", true, genesisInitState, config.GetDefaultLocal())
+ require.NoError(t, err)
+ defer l.Close()
+
+ asa := txntest.Txn{
+ Type: "acfg",
+ Sender: addrs[0],
+ AssetParams: basics.AssetParams{
+ Total: 1000000,
+ Decimals: 3,
+ UnitName: "oz",
+ AssetName: "Gold",
+ URL: "https://gold.rush/",
+ },
+ }
+
+ app := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int axfer
+ itxn_field TypeEnum
+ txn Assets 0
+ itxn_field XferAsset
+
+ txn ApplicationArgs 0
+ byte "optin"
+ ==
+ bz withdraw
+ // let AssetAmount default to 0
+ global CurrentApplicationAddress
+ itxn_field AssetReceiver
+ b submit
+withdraw:
+ txn ApplicationArgs 0
+ byte "close"
+ ==
+ bz noclose
+ txn Accounts 1
+ itxn_field AssetCloseTo
+ b skipamount
+noclose: int 10000
+ itxn_field AssetAmount
+skipamount:
+ txn Accounts 1
+ itxn_field AssetReceiver
+submit: itxn_submit
+`),
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &asa, &app)
+ vb := endBlock(t, l, eval)
+
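+ // asa was the first txn in this block, so the asset gets ID 1; the app create follows and gets ID 2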
+ asaIndex := basics.AssetIndex(1)
+ require.Equal(t, asaIndex, vb.Block().Payset[0].ApplyData.ConfigAsset)
+ appIndex := basics.AppIndex(2)
+ require.Equal(t, appIndex, vb.Block().Payset[1].ApplyData.ApplicationID)
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 300000, // account min balance, optin min balance, plus fees
+ // stay under 1M, to avoid rewards complications
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &fund)
+ endBlock(t, l, eval)
+
+ fundgold := txntest.Txn{
+ Type: "axfer",
+ Sender: addrs[0],
+ XferAsset: asaIndex,
+ AssetReceiver: appIndex.Address(),
+ AssetAmount: 20000,
+ }
+
+ // Fail, because app account is not opted in.
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &fundgold, fmt.Sprintf("asset %d missing", asaIndex))
+ endBlock(t, l, eval)
+
+ amount, in := holding(t, l, appIndex.Address(), asaIndex)
+ require.False(t, in)
+ require.Equal(t, amount, uint64(0))
+
+ optin := txntest.Txn{
+ Type: "appl",
+ ApplicationID: appIndex,
+ Sender: addrs[0],
+ ApplicationArgs: [][]byte{[]byte("optin")},
+ ForeignAssets: []basics.AssetIndex{asaIndex},
+ }
+
+ // Tell the app to opt itself in.
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &optin)
+ endBlock(t, l, eval)
+
+ amount, in = holding(t, l, appIndex.Address(), asaIndex)
+ require.True(t, in)
+ require.Equal(t, amount, uint64(0))
+
+ // Now, succeed, because opted in.
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &fundgold)
+ endBlock(t, l, eval)
+
+ amount, in = holding(t, l, appIndex.Address(), asaIndex)
+ require.True(t, in)
+ require.Equal(t, amount, uint64(20000))
+
+ withdraw := txntest.Txn{
+ Type: "appl",
+ ApplicationID: appIndex,
+ Sender: addrs[0],
+ ApplicationArgs: [][]byte{[]byte("withdraw")},
+ ForeignAssets: []basics.AssetIndex{asaIndex},
+ Accounts: []basics.Address{addrs[0]},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &withdraw)
+ endBlock(t, l, eval)
+
+ amount, in = holding(t, l, appIndex.Address(), asaIndex)
+ require.True(t, in)
+ require.Equal(t, amount, uint64(10000))
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, withdraw.Noted("2"))
+ endBlock(t, l, eval)
+
+ amount, in = holding(t, l, appIndex.Address(), asaIndex)
+ require.True(t, in) // Zero left, but still opted in
+ require.Equal(t, amount, uint64(0))
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, withdraw.Noted("3"), "underflow on subtracting")
+ endBlock(t, l, eval)
+
+ amount, in = holding(t, l, appIndex.Address(), asaIndex)
+ require.True(t, in) // Zero left, but still opted in
+ require.Equal(t, amount, uint64(0))
+
+ close := txntest.Txn{
+ Type: "appl",
+ ApplicationID: appIndex,
+ Sender: addrs[0],
+ ApplicationArgs: [][]byte{[]byte("close")},
+ ForeignAssets: []basics.AssetIndex{asaIndex},
+ Accounts: []basics.Address{addrs[0]},
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &close)
+ endBlock(t, l, eval)
+
+ amount, in = holding(t, l, appIndex.Address(), asaIndex)
+ require.False(t, in) // Zero left, not opted in
+ require.Equal(t, amount, uint64(0))
+
+ // Now, fail again, opted out
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, fundgold.Noted("2"), fmt.Sprintf("asset %d missing", asaIndex))
+ endBlock(t, l, eval)
+
+ // Do it all again, so we can test closeTo when we have a non-zero balance
+ // Tell the app to opt itself in.
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, optin.Noted("a"), fundgold.Noted("a"))
+ endBlock(t, l, eval)
+
+ amount, _ = holding(t, l, appIndex.Address(), asaIndex)
+ require.Equal(t, uint64(20000), amount)
+ left, _ := holding(t, l, addrs[0], asaIndex)
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, close.Noted("a"))
+ endBlock(t, l, eval)
+
+ amount, _ = holding(t, l, appIndex.Address(), asaIndex)
+ require.Equal(t, uint64(0), amount)
+ back, _ := holding(t, l, addrs[0], asaIndex)
+ require.Equal(t, uint64(20000), back-left)
+}
+
+func newTestLedger(t testing.TB, balances bookkeeping.GenesisBalances) *ledger.Ledger {
+ return newTestLedgerWithConsensusVersion(t, balances, protocol.ConsensusFuture)
+}
+
+func newTestLedgerWithConsensusVersion(t testing.TB, balances bookkeeping.GenesisBalances, cv protocol.ConsensusVersion) *ledger.Ledger {
+ var genHash crypto.Digest
+ crypto.RandBytes(genHash[:])
+ genBlock, err := bookkeeping.MakeGenesisBlock(cv, balances, "test", genHash)
+ require.NoError(t, err)
+ require.False(t, genBlock.FeeSink.IsZero())
+ require.False(t, genBlock.RewardsPool.IsZero())
+ dbName := fmt.Sprintf("%s.%d", t.Name(), crypto.RandUint64())
+ cfg := config.GetDefaultLocal()
+ cfg.Archival = true
+ l, err := ledger.OpenLedger(logging.Base(), dbName, true, ledgercore.InitState{
+ Block: genBlock,
+ Accounts: balances.Balances,
+ GenesisHash: genHash,
+ }, cfg)
+ require.NoError(t, err)
+ return l
+}
+
+// TestClawbackAction ensures an app address can act as clawback address.
+func TestClawbackAction(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ asaIndex := basics.AssetIndex(1)
+ appIndex := basics.AppIndex(2)
+
+ asa := txntest.Txn{
+ Type: "acfg",
+ Sender: addrs[0],
+ AssetParams: basics.AssetParams{
+ Total: 1000000,
+ Decimals: 3,
+ UnitName: "oz",
+ AssetName: "Gold",
+ URL: "https://gold.rush/",
+ Clawback: appIndex.Address(),
+ },
+ }
+
+ app := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+
+ int axfer
+ itxn_field TypeEnum
+
+ txn Assets 0
+ itxn_field XferAsset
+
+ txn Accounts 1
+ itxn_field AssetSender
+
+ txn Accounts 2
+ itxn_field AssetReceiver
+
+ int 1000
+ itxn_field AssetAmount
+
+ itxn_submit
+`),
+ }
+
+ optin := txntest.Txn{
+ Type: "axfer",
+ Sender: addrs[1],
+ AssetReceiver: addrs[1],
+ XferAsset: asaIndex,
+ }
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &asa, &app, &optin)
+ vb := endBlock(t, l, eval)
+
+ require.Equal(t, asaIndex, vb.Block().Payset[0].ApplyData.ConfigAsset)
+ require.Equal(t, appIndex, vb.Block().Payset[1].ApplyData.ApplicationID)
+
+ bystander := addrs[2] // Has no authority of its own
+ overpay := txntest.Txn{
+ Type: "pay",
+ Sender: bystander,
+ Receiver: bystander,
+ Fee: 2000, // Overpay the fee so that the app account can remain unfunded
+ }
+ clawmove := txntest.Txn{
+ Type: "appl",
+ Sender: bystander,
+ ApplicationID: appIndex,
+ ForeignAssets: []basics.AssetIndex{asaIndex},
+ Accounts: []basics.Address{addrs[0], addrs[1]},
+ }
+ eval = nextBlock(t, l, true, nil)
+ err := txgroup(t, l, eval, &overpay, &clawmove)
+ require.NoError(t, err)
+ endBlock(t, l, eval)
+
+ amount, _ := holding(t, l, addrs[1], asaIndex)
+ require.Equal(t, amount, uint64(1000))
+}
+
+// TestRekeyAction ensures an app can transact for a rekeyed account
+func TestRekeyAction(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ appIndex := basics.AppIndex(1)
+ ezpayer := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[5],
+ ApprovalProgram: main(`
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 5000
+ itxn_field Amount
+ txn Accounts 1
+ itxn_field Sender
+ txn Accounts 2
+ itxn_field Receiver
+ txn NumAccounts
+ int 3
+ ==
+ bz skipclose
+ txn Accounts 3
+ itxn_field CloseRemainderTo
+skipclose:
+ itxn_submit
+`),
+ }
+
+ rekey := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: addrs[0],
+ RekeyTo: appIndex.Address(),
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &ezpayer, &rekey)
+ endBlock(t, l, eval)
+
+ useacct := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ Accounts: []basics.Address{addrs[0], addrs[2]}, // pay 2 from 0 (which was rekeyed)
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &useacct)
+ endBlock(t, l, eval)
+
+ // App was never funded (didn't spend from its own acct)
+ require.Equal(t, uint64(0), micros(t, l, basics.AppIndex(1).Address()))
+ // addrs[2] got paid
+ require.Equal(t, uint64(5000), micros(t, l, addrs[2])-micros(t, l, addrs[6]))
+ // addrs[0] paid 5k + rekey fee + inner txn fee
+ require.Equal(t, uint64(7000), micros(t, l, addrs[6])-micros(t, l, addrs[0]))
+
+ baduse := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ Accounts: []basics.Address{addrs[2], addrs[0]}, // pay 0 from 2
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &baduse, "unauthorized")
+ endBlock(t, l, eval)
+
+ // Now, we close addrs[0], which wipes its rekey status. Reopen
+ // it, and make sure the app can't spend.
+
+ close := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ Accounts: []basics.Address{addrs[0], addrs[2], addrs[3]}, // close to 3
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &close)
+ endBlock(t, l, eval)
+
+ require.Equal(t, uint64(0), micros(t, l, addrs[0]))
+
+ payback := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[3],
+ Receiver: addrs[0],
+ Amount: 10_000_000,
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &payback)
+ endBlock(t, l, eval)
+
+ require.Equal(t, uint64(10_000_000), micros(t, l, addrs[0]))
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, useacct.Noted("2"), "unauthorized")
+ endBlock(t, l, eval)
+}
+
+// TestRekeyActionCloseAccount ensures closing and reopening a rekeyed account in a single app call
+// properly removes the app as an authorizer for the account
+func TestRekeyActionCloseAccount(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ appIndex := basics.AppIndex(1)
+ create := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[5],
+ ApprovalProgram: main(`
+ // close account 1
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ txn Accounts 1
+ itxn_field Sender
+ txn Accounts 2
+ itxn_field CloseRemainderTo
+ itxn_submit
+
+ // reopen account 1
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 5000
+ itxn_field Amount
+ txn Accounts 1
+ itxn_field Receiver
+ itxn_submit
+ // send from account 1 again (should fail because closing an account erases rekeying)
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 1
+ itxn_field Amount
+ txn Accounts 1
+ itxn_field Sender
+ txn Accounts 2
+ itxn_field Receiver
+ itxn_submit
+`),
+ }
+
+ rekey := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: addrs[0],
+ RekeyTo: appIndex.Address(),
+ }
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[1],
+ Receiver: appIndex.Address(),
+ Amount: 1_000_000,
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &create, &rekey, &fund)
+ endBlock(t, l, eval)
+
+ useacct := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ Accounts: []basics.Address{addrs[0], addrs[2]},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &useacct, "unauthorized")
+ endBlock(t, l, eval)
+}
+
+// TestDuplicatePayAction shows that two pays with the same parameters can be issued as inner transactions
+func TestDuplicatePayAction(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ appIndex := basics.AppIndex(1)
+ create := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 5000
+ itxn_field Amount
+ txn Accounts 1
+ itxn_field Receiver
+ itxn_submit
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 5000
+ itxn_field Amount
+ txn Accounts 1
+ itxn_field Receiver
+ itxn_submit
+`),
+ }
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 200000, // account min balance, plus fees
+ }
+
+ paytwice := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ Accounts: []basics.Address{addrs[1]}, // pay self
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &create, &fund, &paytwice, create.Noted("in same block"))
+ vb := endBlock(t, l, eval)
+
+ require.Equal(t, appIndex, vb.Block().Payset[0].ApplyData.ApplicationID)
+ require.Equal(t, 4, len(vb.Block().Payset))
+ // create=1, fund=2, payTwice=3,4,5
+ require.Equal(t, basics.AppIndex(6), vb.Block().Payset[3].ApplyData.ApplicationID)
+
+ ad0 := micros(t, l, addrs[0])
+ ad1 := micros(t, l, addrs[1])
+ app := micros(t, l, appIndex.Address())
+
+ // create(1000) and fund(1000 + 200000), extra create (1000)
+ require.Equal(t, 203000, int(genBalances.Balances[addrs[0]].MicroAlgos.Raw-ad0))
+ // paid 10000, but 1000 fee on tx
+ require.Equal(t, 9000, int(ad1-genBalances.Balances[addrs[1]].MicroAlgos.Raw))
+ // app still has 188000 (paid out 10000, and paid 2 x fee to do it)
+ require.Equal(t, 188000, int(app))
+
+ // Now create another app, and see if it gets the index we expect.
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, create.Noted("again"))
+ vb = endBlock(t, l, eval)
+
+ // create=1, fund=2, payTwice=3,4,5, insameblock=6
+ require.Equal(t, basics.AppIndex(7), vb.Block().Payset[0].ApplyData.ApplicationID)
+}
+
+// TestInnerTxnCount ensures that inner transactions increment the TxnCounter
+func TestInnerTxnCount(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ create := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 5000
+ itxn_field Amount
+ txn Accounts 1
+ itxn_field Receiver
+ itxn_submit
+`),
+ }
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: basics.AppIndex(1).Address(),
+ Amount: 200000, // account min balance, plus fees
+ }
+
+ payout1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: basics.AppIndex(1),
+ Accounts: []basics.Address{addrs[1]}, // pay self
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &create, &fund)
+ vb := endBlock(t, l, eval)
+ require.Equal(t, 2, int(vb.Block().TxnCounter))
+
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &payout1)
+ vb = endBlock(t, l, eval)
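+ // payout1 and its single inner pay advance the counter from 2 to 4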
+ require.Equal(t, 4, int(vb.Block().TxnCounter))
+}
+
+// TestAcfgAction ensures assets can be created and configured in teal
+func TestAcfgAction(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ appIndex := basics.AppIndex(1)
+ app := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int acfg
+ itxn_field TypeEnum
+
+ txn ApplicationArgs 0
+ byte "create"
+ ==
+ bz manager
+ int 1000000
+ itxn_field ConfigAssetTotal
+ int 3
+ itxn_field ConfigAssetDecimals
+ byte "oz"
+ itxn_field ConfigAssetUnitName
+ byte "Gold"
+ itxn_field ConfigAssetName
+ byte "https://gold.rush/"
+ itxn_field ConfigAssetURL
+
+ global CurrentApplicationAddress
+ dup
+ dup2
+ itxn_field ConfigAssetManager
+ itxn_field ConfigAssetReserve
+ itxn_field ConfigAssetFreeze
+ itxn_field ConfigAssetClawback
+ b submit
+manager:
+ // Put the current values in the itxn
+ txn Assets 0
+ asset_params_get AssetManager
+ assert // exists
+ itxn_field ConfigAssetManager
+
+ txn Assets 0
+ asset_params_get AssetReserve
+ assert // exists
+ itxn_field ConfigAssetReserve
+
+ txn Assets 0
+ asset_params_get AssetFreeze
+ assert // exists
+ itxn_field ConfigAssetFreeze
+
+ txn Assets 0
+ asset_params_get AssetClawback
+ assert // exists
+ itxn_field ConfigAssetClawback
+
+
+ txn ApplicationArgs 0
+ byte "manager"
+ ==
+ bz reserve
+ txn Assets 0
+ itxn_field ConfigAsset
+ txn ApplicationArgs 1
+ itxn_field ConfigAssetManager
+ b submit
+reserve:
+ txn ApplicationArgs 0
+ byte "reserve"
+ ==
+ bz freeze
+ txn Assets 0
+ itxn_field ConfigAsset
+ txn ApplicationArgs 1
+ itxn_field ConfigAssetReserve
+ b submit
+freeze:
+ txn ApplicationArgs 0
+ byte "freeze"
+ ==
+ bz clawback
+ txn Assets 0
+ itxn_field ConfigAsset
+ txn ApplicationArgs 1
+ itxn_field ConfigAssetFreeze
+ b submit
+clawback:
+ txn ApplicationArgs 0
+ byte "clawback"
+ ==
+ bz error
+ txn Assets 0
+ itxn_field ConfigAsset
+ txn ApplicationArgs 1
+ itxn_field ConfigAssetClawback
+ b submit
+error: err
+submit: itxn_submit
+`),
+ }
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 200_000, // exactly account min balance + one asset
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app, &fund)
+ endBlock(t, l, eval)
+
+ createAsa := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ ApplicationArgs: [][]byte{[]byte("create")},
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ // Can't create an asset if you have exactly 200,000 and still need to pay a fee
+ txn(t, l, eval, &createAsa, "balance 199000 below min 200000")
+ // fund it some more and try again
+ txns(t, l, eval, fund.Noted("more!"), &createAsa)
+ vb := endBlock(t, l, eval)
+
+ asaIndex := vb.Block().Payset[1].EvalDelta.InnerTxns[0].ConfigAsset
+ require.Equal(t, basics.AssetIndex(5), asaIndex)
+
+ asaParams, err := asaParams(t, l, basics.AssetIndex(5))
+ require.NoError(t, err)
+
+ require.Equal(t, 1_000_000, int(asaParams.Total))
+ require.Equal(t, 3, int(asaParams.Decimals))
+ require.Equal(t, "oz", asaParams.UnitName)
+ require.Equal(t, "Gold", asaParams.AssetName)
+ require.Equal(t, "https://gold.rush/", asaParams.URL)
+
+ require.Equal(t, appIndex.Address(), asaParams.Manager)
+
+ for _, a := range []string{"reserve", "freeze", "clawback", "manager"} {
+ check := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ ApplicationArgs: [][]byte{[]byte(a), []byte("junkjunkjunkjunkjunkjunkjunkjunk")},
+ ForeignAssets: []basics.AssetIndex{asaIndex},
+ }
+ eval = nextBlock(t, l, true, nil)
+ t.Log(a)
+ txn(t, l, eval, &check)
+ endBlock(t, l, eval)
+ }
+ // Not the manager anymore so this won't work
+ nodice := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ ApplicationArgs: [][]byte{[]byte("freeze"), []byte("junkjunkjunkjunkjunkjunkjunkjunk")},
+ ForeignAssets: []basics.AssetIndex{asaIndex},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &nodice, "this transaction should be issued by the manager")
+ endBlock(t, l, eval)
+
+}
+
+// TestAsaDuringInit ensures an ASA can be made while initializing an
+// app. In practice, this is impossible, because you would not be
+// able to prefund the account - you don't know the app ID. But here
+// we do know it, so the test helps exercise txncounter changes.
+func TestAsaDuringInit(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ appIndex := basics.AppIndex(2)
+ prefund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 300000, // plenty for min balances, fees
+ }
+
+ app := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: `
+ itxn_begin
+ int acfg
+ itxn_field TypeEnum
+ int 1000000
+ itxn_field ConfigAssetTotal
+ byte "oz"
+ itxn_field ConfigAssetUnitName
+ byte "Gold"
+ itxn_field ConfigAssetName
+ itxn_submit
+ itxn CreatedAssetID
+ int 3
+ ==
+ assert
+ itxn CreatedApplicationID
+ int 0
+ ==
+ assert
+ itxn NumLogs
+ int 0
+ ==
+`,
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &prefund, &app)
+ vb := endBlock(t, l, eval)
+
+ require.Equal(t, appIndex, vb.Block().Payset[1].ApplicationID)
+
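+ // prefund was txn 1 and the app create txn 2, so the inner acfg creates asset ID 3 (as the program itself asserts)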
+ asaIndex := vb.Block().Payset[1].EvalDelta.InnerTxns[0].ConfigAsset
+ require.Equal(t, basics.AssetIndex(3), asaIndex)
+}
+
+func TestRekey(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 1
+ itxn_field Amount
+ global CurrentApplicationAddress
+ itxn_field Receiver
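+ // build the 32-byte address 0x00...01 (31 zero bytes then 0x01) and rekey the app account to it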
+ int 31
+ bzero
+ byte 0x01
+ concat
+ itxn_field RekeyTo
+ itxn_submit
+`),
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app)
+ vb := endBlock(t, l, eval)
+ appIndex := vb.Block().Payset[0].ApplicationID
+ require.Equal(t, basics.AppIndex(1), appIndex)
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 1_000_000,
+ }
+ rekey := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &fund, &rekey)
+ txn(t, l, eval, rekey.Noted("2"), "unauthorized")
+ endBlock(t, l, eval)
+
+}
+
+func TestNote(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 0
+ itxn_field Amount
+ global CurrentApplicationAddress
+ itxn_field Receiver
+ byte "abcdefghijklmnopqrstuvwxyz01234567890"
+ itxn_field Note
+ itxn_submit
+`),
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app)
+ vb := endBlock(t, l, eval)
+ appIndex := vb.Block().Payset[0].ApplicationID
+ require.Equal(t, basics.AppIndex(1), appIndex)
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 1_000_000,
+ }
+ note := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApplicationID: appIndex,
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &fund, &note)
+ vb = endBlock(t, l, eval)
+ alphabet := vb.Block().Payset[1].EvalDelta.InnerTxns[0].Txn.Note
+ require.Equal(t, "abcdefghijklmnopqrstuvwxyz01234567890", string(alphabet))
+}
+
+func TestKeyreg(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ txn ApplicationArgs 0
+ byte "pay"
+ ==
+ bz nonpart
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 1
+ itxn_field Amount
+ txn Sender
+ itxn_field Receiver
+ itxn_submit
+ int 1
+ return
+nonpart:
+ itxn_begin
+ int keyreg
+ itxn_field TypeEnum
+ int 1
+ itxn_field Nonparticipation
+ itxn_submit
+`),
+ }
+
+ // Create the app
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app)
+ vb := endBlock(t, l, eval)
+ appIndex := vb.Block().Payset[0].ApplicationID
+ require.Equal(t, basics.AppIndex(1), appIndex)
+
+ // Give the app a lot of money
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 1_000_000_000,
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &fund)
+ endBlock(t, l, eval)
+
+ require.Equal(t, 1_000_000_000, int(micros(t, l, appIndex.Address())))
+
+ // Build up Residue in RewardsState so it's ready to pay
+ for i := 1; i < 10; i++ {
+ eval := nextBlock(t, l, true, nil)
+ endBlock(t, l, eval)
+ }
+
+ // pay a little
+ pay := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: appIndex,
+ ApplicationArgs: [][]byte{[]byte("pay")},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &pay)
+ endBlock(t, l, eval)
+ // 2000 was earned in rewards (- 1000 fee, -1 pay)
+ require.Equal(t, 1_000_000_999, int(micros(t, l, appIndex.Address())))
+
+ // Go nonpart
+ nonpart := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: appIndex,
+ ApplicationArgs: [][]byte{[]byte("nonpart")},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &nonpart)
+ endBlock(t, l, eval)
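+ // the inner keyreg costs the app another 1000 in fees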
+ require.Equal(t, 999_999_999, int(micros(t, l, appIndex.Address())))
+
+ // Build up Residue in RewardsState so it's ready to pay AGAIN
+ // But expect no rewards
+ for i := 1; i < 100; i++ {
+ eval := nextBlock(t, l, true, nil)
+ endBlock(t, l, eval)
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, pay.Noted("again"))
+ txn(t, l, eval, nonpart.Noted("again"), "cannot change online/offline")
+ endBlock(t, l, eval)
+ // Paid fee + 1. Did not get rewards
+ require.Equal(t, 999_998_998, int(micros(t, l, appIndex.Address())))
+}
+
+func TestInnerAppCall(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+ int 1
+ itxn_field Amount
+ txn Sender
+ itxn_field Receiver
+ itxn_submit
+`),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[1],
+ ApprovalProgram: main(`
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+`),
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app1)
+ vb = endBlock(t, l, eval)
+ index1 := vb.Block().Payset[0].ApplicationID
+
+ fund0 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000_000,
+ }
+ fund1 := fund0
+ fund1.Receiver = index1.Address()
+
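+ // call1 invokes app1, which issues an inner appl call to app0 (made available via ForeignApps)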
+ call1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[2],
+ ApplicationID: index1,
+ ForeignApps: []basics.AppIndex{index0},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &fund0, &fund1, &call1)
+ endBlock(t, l, eval)
+
+}
+
+// TestInnerAppManipulate ensures that apps called from inner transactions make
+// the changes expected when invoked.
+func TestInnerAppManipulate(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ calleeIndex := basics.AppIndex(1)
+ callee := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ // This app sets global key arg[1] to arg[2], or gets arg[1] and logs it
+ ApprovalProgram: main(`
+ txn ApplicationArgs 0
+ byte "set"
+ ==
+ bz next1
+ txn ApplicationArgs 1
+ txn ApplicationArgs 2
+ app_global_put
+ b end
+next1:
+ txn ApplicationArgs 0
+ byte "get"
+ ==
+ bz next2
+ txn ApplicationArgs 1
+ app_global_get
+ log // Fails if key didn't exist, b/c TOS = 0
+ b end
+next2:
+ err
+`),
+ GlobalStateSchema: basics.StateSchema{
+ NumByteSlice: 1,
+ },
+ }
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: calleeIndex.Address(),
+ Amount: 1_000_000,
+ }
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &callee, &fund)
+ vb := endBlock(t, l, eval)
+ require.Equal(t, calleeIndex, vb.Block().Payset[0].ApplicationID)
+
+ callerIndex := basics.AppIndex(3)
+ caller := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ byte "set"
+ itxn_field ApplicationArgs
+ byte "X"
+ itxn_field ApplicationArgs
+ byte "A"
+ itxn_field ApplicationArgs
+ itxn_submit
+ itxn NumLogs
+ int 0
+ ==
+ assert
+ b end
+`),
+ }
+ fund.Receiver = callerIndex.Address()
+
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &caller, &fund)
+ vb = endBlock(t, l, eval)
+ require.Equal(t, callerIndex, vb.Block().Payset[0].ApplicationID)
+
+ call := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: callerIndex,
+ ForeignApps: []basics.AppIndex{calleeIndex},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &call)
+ vb = endBlock(t, l, eval)
+ tib := vb.Block().Payset[0]
+ // No changes in the top-level EvalDelta
+ require.Empty(t, tib.EvalDelta.GlobalDelta)
+ require.Empty(t, tib.EvalDelta.LocalDeltas)
+
+ inner := tib.EvalDelta.InnerTxns[0]
+ require.Empty(t, inner.EvalDelta.LocalDeltas)
+
+ require.Len(t, inner.EvalDelta.GlobalDelta, 1)
+ require.Equal(t, basics.ValueDelta{
+ Action: basics.SetBytesAction,
+ Bytes: "A",
+ }, inner.EvalDelta.GlobalDelta["X"])
+}
+
+// TestCreateAndUse checks that an ASA can be created in an early tx, and then
+// used in a later app call tx (in the same group). This was not allowed until
+// v6, because of the strict adherence to the foreign-arrays rules.
+func TestCreateAndUse(t *testing.T) {
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ createapp := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int axfer; itxn_field TypeEnum
+ int 0; itxn_field Amount
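+ // gaid 0 loads the ID of the asset created by the group's first transaction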
+ gaid 0; itxn_field XferAsset
+ global CurrentApplicationAddress; itxn_field Sender
+ global CurrentApplicationAddress; itxn_field AssetReceiver
+ itxn_submit
+`),
+ }
+ appIndex := basics.AppIndex(1)
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 1_000_000,
+ }
+
+ createasa := txntest.Txn{
+ Type: "acfg",
+ Sender: addrs[0],
+ AssetParams: basics.AssetParams{
+ Total: 1000000,
+ Decimals: 3,
+ UnitName: "oz",
+ AssetName: "Gold",
+ URL: "https://gold.rush/",
+ },
+ }
+ asaIndex := basics.AssetIndex(3)
+
+ use := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: basics.AppIndex(1),
+ // The point of this test is to show the following (psychic) setting is unnecessary.
+ //ForeignAssets: []basics.AssetIndex{asaIndex},
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &createapp)
+ txn(t, l, eval, &fund)
+ err := txgroup(t, l, eval, &createasa, &use)
+ require.NoError(t, err)
+ vb := endBlock(t, l, eval)
+
+ require.Equal(t, appIndex, vb.Block().Payset[0].ApplyData.ApplicationID)
+ require.Equal(t, asaIndex, vb.Block().Payset[2].ApplyData.ConfigAsset)
+}
+
+func TestGtxnEffects(t *testing.T) {
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ createapp := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
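+ // gtxn 0 CreatedAssetID reads an effects field of the group's first txn: the ID of the asset it created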
+ gtxn 0 CreatedAssetID
+ int 3
+ ==
+ assert
+`),
+ }
+ appIndex := basics.AppIndex(1)
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 1_000_000,
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &createapp, &fund)
+
+ createasa := txntest.Txn{
+ Type: "acfg",
+ Sender: addrs[0],
+ AssetParams: basics.AssetParams{
+ Total: 1000000,
+ Decimals: 3,
+ UnitName: "oz",
+ AssetName: "Gold",
+ URL: "https://gold.rush/",
+ },
+ }
+ asaIndex := basics.AssetIndex(3)
+
+ see := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: basics.AppIndex(1),
+ }
+
+ err := txgroup(t, l, eval, &createasa, &see)
+ require.NoError(t, err)
+ vb := endBlock(t, l, eval)
+
+ require.Equal(t, appIndex, vb.Block().Payset[0].ApplyData.ApplicationID)
+ require.Equal(t, asaIndex, vb.Block().Payset[2].ApplyData.ConfigAsset)
+}
+
+func TestBasicReentry(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+`),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ call1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[2],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{index0},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &call1, "self-call")
+ endBlock(t, l, eval)
+}
+
+func TestIndirectReentry(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ txn Applications 2
+ itxn_field Applications
+ itxn_submit
+`),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000,
+ }
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+`),
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app1, &fund)
+ vb = endBlock(t, l, eval)
+ index1 := vb.Block().Payset[0].ApplicationID
+
+ call1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{index1, index0},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &call1, "attempt to re-enter")
+ endBlock(t, l, eval)
+}
+
+// This tests a valid form of re-entry (which may not be the correct word here).
+// When A calls B, returns to A, and then A calls C which calls B, the execution
+// should not produce an error, because B never occurs twice in the call stack.
+func TestValidAppReentry(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 2
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ txn Applications 2
+ itxn_field Applications
+ itxn_submit
+`),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ fund0 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000,
+ }
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ int 3
+ int 3
+ ==
+ assert
+`),
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app1, &fund0)
+ vb = endBlock(t, l, eval)
+ index1 := vb.Block().Payset[0].ApplicationID
+
+ app2 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+`),
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app2)
+ vb = endBlock(t, l, eval)
+ index2 := vb.Block().Payset[0].ApplicationID
+
+ fund2 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index2.Address(),
+ Amount: 1_000_000,
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &fund2)
+ _ = endBlock(t, l, eval)
+
+ call1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{index2, index1, index0},
+ }
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &call1)
+ endBlock(t, l, eval)
+}
+
+func TestMaxInnerTxFanOut(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+`),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ fund0 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000,
+ }
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ int 3
+ int 3
+ ==
+ assert
+`),
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app1, &fund0)
+ vb = endBlock(t, l, eval)
+ index1 := vb.Block().Payset[0].ApplicationID
+
+ callTxGroup := make([]txntest.Txn, 16)
+ for i := 0; i < 16; i++ {
+ callTxGroup[i] = txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{index1},
+ }
+ }
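+ // 16 top-level calls, each spawning 16 inner appl calls: 256 inner txns, at (but not over) the group's pooled limit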
+ eval = nextBlock(t, l, true, nil)
+ err := txgroup(t, l, eval, &callTxGroup[0], &callTxGroup[1], &callTxGroup[2], &callTxGroup[3], &callTxGroup[4], &callTxGroup[5], &callTxGroup[6], &callTxGroup[7], &callTxGroup[8], &callTxGroup[9], &callTxGroup[10], &callTxGroup[11], &callTxGroup[12], &callTxGroup[13], &callTxGroup[14], &callTxGroup[15])
+ require.NoError(t, err)
+
+ endBlock(t, l, eval)
+}
+
+func TestExceedMaxInnerTxFanOut(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ var program string
+ for i := 0; i < 17; i++ {
+ program += `
+ itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+ itxn_submit
+`
+ }
+
+ // 17 inner txns
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(program),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ fund0 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000,
+ }
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ int 3
+ int 3
+ ==
+ assert
+`),
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app1, &fund0)
+ vb = endBlock(t, l, eval)
+ index1 := vb.Block().Payset[0].ApplicationID
+
+ callTxGroup := make([]txntest.Txn, 16)
+ for i := 0; i < 16; i++ {
+ callTxGroup[i] = txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{index1},
+ }
+ }
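+ // 16 top-level calls x 17 inner calls = 272 inner txns, which exceeds the group's pooled limit of 256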
+ eval = nextBlock(t, l, true, nil)
+ err := txgroup(t, l, eval, &callTxGroup[0], &callTxGroup[1], &callTxGroup[2], &callTxGroup[3], &callTxGroup[4], &callTxGroup[5], &callTxGroup[6], &callTxGroup[7], &callTxGroup[8], &callTxGroup[9], &callTxGroup[10], &callTxGroup[11], &callTxGroup[12], &callTxGroup[13], &callTxGroup[14], &callTxGroup[15])
+ require.Error(t, err)
+
+ endBlock(t, l, eval)
+}
+
+func TestMaxInnerTxForSingleAppCall(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ program := `
+int 1
+loop:
+itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+itxn_submit
+int 1
++
+dup
+int 256
+<=
+bnz loop
+int 257
+==
+assert
+`
+
+ // 256 inner txns
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(program),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ fund0 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000,
+ }
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ int 3
+ int 3
+ ==
+ assert
+`),
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app1, &fund0)
+ vb = endBlock(t, l, eval)
+ index1 := vb.Block().Payset[0].ApplicationID
+
+ callTxGroup := make([]txntest.Txn, 16)
+ callTxGroup[0] = txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{index1},
+ }
+ for i := 1; i < 16; i++ {
+ callTxGroup[i] = txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index1,
+ ForeignApps: []basics.AppIndex{},
+ }
+ }
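+ // the first call issues all 256 inner txns itself, drawing on the allowance pooled from the other 15 calls in the group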
+ eval = nextBlock(t, l, true, nil)
+ err := txgroup(t, l, eval, &callTxGroup[0], &callTxGroup[1], &callTxGroup[2], &callTxGroup[3], &callTxGroup[4], &callTxGroup[5], &callTxGroup[6], &callTxGroup[7], &callTxGroup[8], &callTxGroup[9], &callTxGroup[10], &callTxGroup[11], &callTxGroup[12], &callTxGroup[13], &callTxGroup[14], &callTxGroup[15])
+ require.NoError(t, err)
+
+ endBlock(t, l, eval)
+}
+
+func TestExceedMaxInnerTxForSingleAppCall(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ program := `
+int 1
+loop:
+itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+itxn_submit
+int 1
++
+dup
+int 257
+<=
+bnz loop
+int 258
+==
+assert
+`
+
+ // 257 inner txns
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(program),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ fund0 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000,
+ }
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ int 3
+ int 3
+ ==
+ assert
+`),
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app1, &fund0)
+ vb = endBlock(t, l, eval)
+ index1 := vb.Block().Payset[0].ApplicationID
+
+ callTxGroup := make([]txntest.Txn, 16)
+ callTxGroup[0] = txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{index1},
+ }
+ for i := 1; i < 16; i++ {
+ callTxGroup[i] = txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index1,
+ ForeignApps: []basics.AppIndex{},
+ }
+ }
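+ // 257 inner txns in a single call exceeds the 256 pooled for the whole group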
+ eval = nextBlock(t, l, true, nil)
+ err := txgroup(t, l, eval, &callTxGroup[0], &callTxGroup[1], &callTxGroup[2], &callTxGroup[3], &callTxGroup[4], &callTxGroup[5], &callTxGroup[6], &callTxGroup[7], &callTxGroup[8], &callTxGroup[9], &callTxGroup[10], &callTxGroup[11], &callTxGroup[12], &callTxGroup[13], &callTxGroup[14], &callTxGroup[15])
+ require.Error(t, err)
+
+ endBlock(t, l, eval)
+}
+
+func TestAbortWhenInnerAppCallFails(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+itxn_begin
+ int appl
+ itxn_field TypeEnum
+ txn Applications 1
+ itxn_field ApplicationID
+itxn_submit
+int 1
+int 1
+==
+assert
+`),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ fund0 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000,
+ }
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ int 3
+ int 2
+ ==
+ assert
+`),
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app1, &fund0)
+ vb = endBlock(t, l, eval)
+ index1 := vb.Block().Payset[0].ApplicationID
+
+ callTx := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{index1},
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &callTx, "logic eval error")
+ endBlock(t, l, eval)
+}
+
+func TestCreatedAppsAreAccessible(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ ops, err := logic.AssembleStringWithVersion("int 1\nint 1\nassert", logic.AssemblerMaxVersion)
+ require.NoError(t, err)
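+ // embed the assembled bytecode as a TEAL byte constant so the outer app can supply it as the inner app's approval and clear-state programs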
+ program := "byte 0x" + hex.EncodeToString(ops.Program)
+
+ createapp := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int appl; itxn_field TypeEnum
+ ` + program + `; itxn_field ApprovalProgram
+ ` + program + `; itxn_field ClearStateProgram
+ int 1; itxn_field GlobalNumUint
+ int 2; itxn_field LocalNumByteSlice
+ int 3; itxn_field LocalNumUint
+ itxn_submit`),
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &createapp)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ fund0 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000,
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &fund0)
+ endBlock(t, l, eval)
+
+ callTx := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{},
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &callTx)
+ endBlock(t, l, eval)
+ index1 := basics.AppIndex(1)
+
+ callTx = txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index1,
+ ForeignApps: []basics.AppIndex{},
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &callTx)
+ endBlock(t, l, eval)
+}
+
+func TestInvalidAppsNotAccessible(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ app0 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+itxn_begin
+ int appl
+ itxn_field TypeEnum
+ int 2
+ itxn_field ApplicationID
+itxn_submit`),
+ }
+ eval := nextBlock(t, l, true, nil)
+ txn(t, l, eval, &app0)
+ vb := endBlock(t, l, eval)
+ index0 := vb.Block().Payset[0].ApplicationID
+
+ fund0 := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: index0.Address(),
+ Amount: 1_000_000,
+ }
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+int 2
+int 2
+==
+assert
+`),
+ }
+ eval = nextBlock(t, l, true, nil)
+ txns(t, l, eval, &app1, &fund0)
+ endBlock(t, l, eval)
+
+ callTx := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: index0,
+ ForeignApps: []basics.AppIndex{},
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &callTx, "invalid App reference 2")
+ endBlock(t, l, eval)
+}
+
+func TestInvalidAssetsNotAccessible(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+ l := newTestLedger(t, genBalances)
+ defer l.Close()
+
+ createapp := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: main(`
+ itxn_begin
+ int axfer; itxn_field TypeEnum
+ int 0; itxn_field Amount
+ int 3; itxn_field XferAsset
+ global CurrentApplicationAddress; itxn_field Sender
+ global CurrentApplicationAddress; itxn_field AssetReceiver
+ itxn_submit
+`),
+ }
+ appIndex := basics.AppIndex(1)
+
+ fund := txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: appIndex.Address(),
+ Amount: 1_000_000,
+ }
+
+ createasa := txntest.Txn{
+ Type: "acfg",
+ Sender: addrs[0],
+ AssetParams: basics.AssetParams{
+ Total: 1000000,
+ Decimals: 3,
+ UnitName: "oz",
+ AssetName: "Gold",
+ URL: "https://gold.rush/",
+ },
+ }
+
+ eval := nextBlock(t, l, true, nil)
+ txns(t, l, eval, &createapp, &fund, &createasa)
+ endBlock(t, l, eval)
+
+ use := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApplicationID: basics.AppIndex(1),
+ }
+
+ eval = nextBlock(t, l, true, nil)
+ txn(t, l, eval, &use, "invalid Asset reference 3")
+ endBlock(t, l, eval)
+}
+
+func executeMegaContract(b *testing.B) {
+ genBalances, addrs, _ := ledgertesting.NewTestGenesis()
+
+ vTest := config.Consensus[protocol.ConsensusFuture]
+ vTest.MaxAppProgramCost = 20000
+ var cv protocol.ConsensusVersion = "temp test"
+ config.Consensus[cv] = vTest
+
+ l := newTestLedgerWithConsensusVersion(b, genBalances, cv)
+ defer l.Close()
+ defer delete(config.Consensus, cv)
+
+ // The app must use maximum memory and then recursively create a new app with the same approval program.
+ // Recursion terminates when a depth of 256 is reached.
+ // Along the way the program fills scratch space and the stack.
+ depth := 255
+ createapp := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: `
+ int 0
+ loop:
+ dup
+ int 4096
+ bzero
+ stores
+ int 1
+ +
+ dup
+ int 256
+ <
+ bnz loop
+ pop
+ int 0
+ loop2:
+ int 4096
+ bzero
+ swap
+ int 1
+ +
+ dup
+ int 994
+ <=
+ bnz loop2
+ txna ApplicationArgs 0
+ btoi
+ int 1
+ -
+ dup
+ int 0
+ <=
+ bnz done
+ itxn_begin
+ itob
+ itxn_field ApplicationArgs
+ int appl
+ itxn_field TypeEnum
+ txn ApprovalProgram
+ itxn_field ApprovalProgram
+ txn ClearStateProgram
+ itxn_field ClearStateProgram
+ itxn_submit
+ done:
+ int 1
+ return`,
+ ApplicationArgs: [][]byte{{byte(depth)}},
+ ExtraProgramPages: 3,
+ }
+
+ funds := make([]*txntest.Txn, 256)
+ for i := 257; i <= 2*256; i++ {
+ funds[i-257] = &txntest.Txn{
+ Type: "pay",
+ Sender: addrs[0],
+ Receiver: basics.AppIndex(i).Address(),
+ Amount: 1_000_000,
+ }
+ }
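+ // pre-fund the accounts of app IDs 257 through 512, which should cover the mega-app and its recursively created descendants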
+
+ eval := nextBlock(b, l, true, nil)
+ txns(b, l, eval, funds...)
+ endBlock(b, l, eval)
+
+ app1 := txntest.Txn{
+ Type: "appl",
+ Sender: addrs[0],
+ ApprovalProgram: `int 1`,
+ }
+
+ eval = nextBlock(b, l, true, nil)
+ err := txgroup(b, l, eval, &createapp, &app1, &app1, &app1, &app1, &app1, &app1)
+ require.NoError(b, err)
+ endBlock(b, l, eval)
+}
+
+func BenchmarkMaximumCallStackDepth(b *testing.B) {
+ for i := 0; i < b.N; i++ {
+ executeMegaContract(b)
+ }
+}
diff --git a/ledger/internal/assetcow.go b/ledger/internal/assetcow.go
index 2fbfb5ddc..c2013c30c 100644
--- a/ledger/internal/assetcow.go
+++ b/ledger/internal/assetcow.go
@@ -28,8 +28,6 @@ func (cs *roundCowState) AllocateAsset(addr basics.Address, index basics.AssetIn
Creator: addr,
Created: true,
}
-
- cs.trackCreatable(basics.CreatableIndex(index))
} else {
aa := ledgercore.AccountAsset{
Address: addr,
diff --git a/ledger/internal/cow.go b/ledger/internal/cow.go
index aab0ae256..92b003125 100644
--- a/ledger/internal/cow.go
+++ b/ledger/internal/cow.go
@@ -74,11 +74,6 @@ type roundCowState struct {
// cache maintaining accountIdx used in getKey for local keys access
compatibilityGetKeyCache map[basics.Address]map[storagePtr]uint64
- // index of a txn within a group; used in conjunction with trackedCreatables
- groupIdx int
- // track creatables created during each transaction in the round
- trackedCreatables map[int]basics.CreatableIndex
-
// prevTotals contains the accounts totals for the previous round. It's being used to calculate the totals for the new round
// so that we can perform the validation test on these to ensure the block evaluator generates a valid changeset.
prevTotals ledgercore.AccountTotals
@@ -86,13 +81,12 @@ type roundCowState struct {
func makeRoundCowState(b roundCowParent, hdr bookkeeping.BlockHeader, proto config.ConsensusParams, prevTimestamp int64, prevTotals ledgercore.AccountTotals, hint int) *roundCowState {
cb := roundCowState{
- lookupParent: b,
- commitParent: nil,
- proto: proto,
- mods: ledgercore.MakeStateDelta(&hdr, prevTimestamp, hint, 0),
- sdeltas: make(map[basics.Address]map[storagePtr]*storageDelta),
- trackedCreatables: make(map[int]basics.CreatableIndex),
- prevTotals: prevTotals,
+ lookupParent: b,
+ commitParent: nil,
+ proto: proto,
+ mods: ledgercore.MakeStateDelta(&hdr, prevTimestamp, hint, 0),
+ sdeltas: make(map[basics.Address]map[storagePtr]*storageDelta),
+ prevTotals: prevTotals,
}
// compatibilityMode retains producing applications' eval deltas under the following rule:
@@ -149,10 +143,6 @@ func (cb *roundCowState) prevTimestamp() int64 {
return cb.mods.PrevTimestamp
}
-func (cb *roundCowState) getCreatableIndex(groupIdx int) basics.CreatableIndex {
- return cb.trackedCreatables[groupIdx]
-}
-
func (cb *roundCowState) getCreator(cidx basics.CreatableIndex, ctype basics.CreatableType) (creator basics.Address, ok bool, err error) {
delta, ok := cb.mods.Creatables[cidx]
if ok {
@@ -204,10 +194,6 @@ func (cb *roundCowState) blockHdr(r basics.Round) (bookkeeping.BlockHeader, erro
return cb.lookupParent.blockHdr(r)
}
-func (cb *roundCowState) trackCreatable(creatableIndex basics.CreatableIndex) {
- cb.trackedCreatables[cb.groupIdx] = creatableIndex
-}
-
func (cb *roundCowState) incTxnCount() {
cb.txnCount++
}
@@ -233,12 +219,6 @@ func (cb *roundCowState) child(hint int) *roundCowState {
sdeltas: make(map[basics.Address]map[storagePtr]*storageDelta),
}
- // clone tracked creatables
- ch.trackedCreatables = make(map[int]basics.CreatableIndex)
- for i, tc := range cb.trackedCreatables {
- ch.trackedCreatables[i] = tc
- }
-
if cb.compatibilityMode {
ch.compatibilityMode = cb.compatibilityMode
ch.compatibilityGetKeyCache = make(map[basics.Address]map[storagePtr]uint64)
@@ -246,11 +226,6 @@ func (cb *roundCowState) child(hint int) *roundCowState {
return &ch
}
-// setGroupIdx sets this transaction's index within its group
-func (cb *roundCowState) setGroupIdx(txnIdx int) {
- cb.groupIdx = txnIdx
-}
-
func (cb *roundCowState) commitToParent() {
cb.commitParent.mods.Accts.MergeAccounts(cb.mods.Accts)
diff --git a/ledger/internal/eval.go b/ledger/internal/eval.go
index 336fcb943..384c05902 100644
--- a/ledger/internal/eval.go
+++ b/ledger/internal/eval.go
@@ -286,10 +286,6 @@ func (cs *roundCowState) Get(addr basics.Address, withPendingRewards bool) (basi
return acct, nil
}
-func (cs *roundCowState) GetCreatableID(groupIdx int) basics.CreatableIndex {
- return cs.getCreatableIndex(groupIdx)
-}
-
func (cs *roundCowState) GetCreator(cidx basics.CreatableIndex, ctype basics.CreatableType) (basics.Address, bool, error) {
return cs.getCreator(cidx, ctype)
}
@@ -409,6 +405,7 @@ type BlockEvaluator struct {
type LedgerForEvaluator interface {
LedgerForCowBase
GenesisHash() crypto.Digest
+ GenesisProto() config.ConsensusParams
LatestTotals() (basics.Round, ledgercore.AccountTotals, error)
CompactCertVoters(basics.Round) (*ledgercore.VotersForRound, error)
}
@@ -718,57 +715,17 @@ func (eval *BlockEvaluator) Transaction(txn transactions.SignedTxn, ad transacti
})
}
-// prepareEvalParams creates a logic.EvalParams for each ApplicationCall
-// transaction in the group
-func (eval *BlockEvaluator) prepareEvalParams(txgroup []transactions.SignedTxnWithAD) []*logic.EvalParams {
- var groupNoAD []transactions.SignedTxn
- var pastSideEffects []logic.EvalSideEffects
- var minTealVersion uint64
- pooledApplicationBudget := uint64(0)
- var credit uint64
- res := make([]*logic.EvalParams, len(txgroup))
- for i, txn := range txgroup {
- // Ignore any non-ApplicationCall transactions
- if txn.SignedTxn.Txn.Type != protocol.ApplicationCallTx {
- continue
- }
- if eval.proto.EnableAppCostPooling {
- pooledApplicationBudget += uint64(eval.proto.MaxAppProgramCost)
- } else {
- pooledApplicationBudget = uint64(eval.proto.MaxAppProgramCost)
- }
-
- // Initialize side effects and group without ApplyData lazily
- if groupNoAD == nil {
- groupNoAD = make([]transactions.SignedTxn, len(txgroup))
- for j := range txgroup {
- groupNoAD[j] = txgroup[j].SignedTxn
- }
- pastSideEffects = logic.MakePastSideEffects(len(txgroup))
- minTealVersion = logic.ComputeMinTealVersion(groupNoAD)
- credit, _ = transactions.FeeCredit(groupNoAD, eval.proto.MinTxnFee)
- // intentionally ignoring error here, fees had to have been enough to get here
- }
-
- res[i] = &logic.EvalParams{
- Txn: &groupNoAD[i],
- Proto: &eval.proto,
- TxnGroup: groupNoAD,
- GroupIndex: uint64(i),
- PastSideEffects: pastSideEffects,
- MinTealVersion: &minTealVersion,
- PooledApplicationBudget: &pooledApplicationBudget,
- FeeCredit: &credit,
- Specials: &eval.specials,
- }
- }
- return res
+// TransactionGroup tentatively adds a new transaction group as part of this block evaluation.
+// If the transaction group cannot be added to the block without violating some constraints,
+// an error is returned and the block evaluator state is unchanged.
+func (eval *BlockEvaluator) TransactionGroup(txads []transactions.SignedTxnWithAD) error {
+ return eval.transactionGroup(txads)
}
-// TransactionGroup tentatively executes a group of transactions as part of this block evaluation.
+// transactionGroup tentatively executes a group of transactions as part of this block evaluation.
// If the transaction group cannot be added to the block without violating some constraints,
// an error is returned and the block evaluator state is unchanged.
-func (eval *BlockEvaluator) TransactionGroup(txgroup []transactions.SignedTxnWithAD) error {
+func (eval *BlockEvaluator) transactionGroup(txgroup []transactions.SignedTxnWithAD) error {
// Nothing to do if there are no transactions.
if len(txgroup) == 0 {
return nil
@@ -783,15 +740,14 @@ func (eval *BlockEvaluator) TransactionGroup(txgroup []transactions.SignedTxnWit
var groupTxBytes int
cow := eval.state.child(len(txgroup))
- evalParams := eval.prepareEvalParams(txgroup)
+ evalParams := logic.NewEvalParams(txgroup, &eval.proto, &eval.specials)
// Evaluate each transaction in the group
txibs = make([]transactions.SignedTxnInBlock, 0, len(txgroup))
for gi, txad := range txgroup {
var txib transactions.SignedTxnInBlock
- cow.setGroupIdx(gi)
- err := eval.transaction(txad.SignedTxn, evalParams[gi], txad.ApplyData, cow, &txib)
+ err := eval.transaction(txad.SignedTxn, evalParams, gi, txad.ApplyData, cow, &txib)
if err != nil {
return err
}
@@ -882,7 +838,7 @@ func (eval *BlockEvaluator) checkMinBalance(cow *roundCowState) error {
// transaction tentatively executes a new transaction as part of this block evaluation.
// If the transaction cannot be added to the block without violating some constraints,
// an error is returned and the block evaluator state is unchanged.
-func (eval *BlockEvaluator) transaction(txn transactions.SignedTxn, evalParams *logic.EvalParams, ad transactions.ApplyData, cow *roundCowState, txib *transactions.SignedTxnInBlock) error {
+func (eval *BlockEvaluator) transaction(txn transactions.SignedTxn, evalParams *logic.EvalParams, gi int, ad transactions.ApplyData, cow *roundCowState, txib *transactions.SignedTxnInBlock) error {
var err error
// Only compute the TxID once
@@ -916,7 +872,7 @@ func (eval *BlockEvaluator) transaction(txn transactions.SignedTxn, evalParams *
}
// Apply the transaction, updating the cow balances
- applyData, err := eval.applyTransaction(txn.Txn, cow, evalParams, cow.txnCounter())
+ applyData, err := eval.applyTransaction(txn.Txn, cow, evalParams, gi, cow.txnCounter())
if err != nil {
return fmt.Errorf("transaction %v: %v", txid, err)
}
@@ -953,13 +909,6 @@ func (eval *BlockEvaluator) transaction(txn transactions.SignedTxn, evalParams *
}
}
- // We are not allowing InnerTxns to have InnerTxns yet. Error if that happens.
- for _, itx := range applyData.EvalDelta.InnerTxns {
- if len(itx.ApplyData.EvalDelta.InnerTxns) > 0 {
- return fmt.Errorf("inner transaction has inner transactions %v", itx)
- }
- }
-
// Remember this txn
cow.addTx(txn.Txn, txid)
@@ -967,7 +916,7 @@ func (eval *BlockEvaluator) transaction(txn transactions.SignedTxn, evalParams *
}
// applyTransaction changes the balances according to this transaction.
-func (eval *BlockEvaluator) applyTransaction(tx transactions.Transaction, balances *roundCowState, evalParams *logic.EvalParams, ctr uint64) (ad transactions.ApplyData, err error) {
+func (eval *BlockEvaluator) applyTransaction(tx transactions.Transaction, balances *roundCowState, evalParams *logic.EvalParams, gi int, ctr uint64) (ad transactions.ApplyData, err error) {
params := balances.ConsensusParams()
// move fee to pool
@@ -998,7 +947,7 @@ func (eval *BlockEvaluator) applyTransaction(tx transactions.Transaction, balanc
err = apply.AssetFreeze(tx.AssetFreezeTxnFields, tx.Header, balances, eval.specials, &ad)
case protocol.ApplicationCallTx:
- err = apply.ApplicationCall(tx.ApplicationCallTxnFields, tx.Header, balances, &ad, evalParams, ctr)
+ err = apply.ApplicationCall(tx.ApplicationCallTxnFields, tx.Header, balances, &ad, gi, evalParams, ctr)
case protocol.CompactCertTx:
// in case of a CompactCertTx transaction, we want to "apply" it only in validate or generate mode. This will deviate the cow's CompactCertNext depending of
@@ -1014,6 +963,11 @@ func (eval *BlockEvaluator) applyTransaction(tx transactions.Transaction, balanc
err = fmt.Errorf("Unknown transaction type %v", tx.Type)
}
+ // Record first, so that details can all be used in logic evaluation, even
+ // if cleared below. For example, `gaid`, introduced in v28, is now
+ // implemented in terms of the AD fields introduced in v30.
+ evalParams.RecordAD(gi, ad)
+
// If the protocol does not support rewards in ApplyData,
// clear them out.
if !params.RewardsInApplyData {
@@ -1022,6 +976,14 @@ func (eval *BlockEvaluator) applyTransaction(tx transactions.Transaction, balanc
ad.CloseRewards = basics.MicroAlgos{}
}
+ // No separate config for activating these AD fields because inner
+ // transactions require their presence, so the consensus update to add
+ // inners also stores these IDs.
+ if params.MaxInnerTransactions == 0 {
+ ad.ApplicationID = 0
+ ad.ConfigAsset = 0
+ }
+
return
}
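Put differently, the per-transaction EvalParams slice is gone: the whole group shares a single logic.EvalParams, and created IDs now flow to the AVM through the ApplyData recorded on it rather than through the removed trackedCreatables map. A minimal sketch of that flow, assuming only the names visible in this hunk (txgroup, cow, eval); the real code routes through eval.transaction rather than calling applyTransaction directly.

// Sketch, not the actual implementation.
ep := logic.NewEvalParams(txgroup, &eval.proto, &eval.specials) // one object for the whole group
for gi := range txgroup {
	ad, err := eval.applyTransaction(txgroup[gi].SignedTxn.Txn, cow, ep, gi, cow.txnCounter())
	if err != nil {
		// the real code returns the error from transactionGroup
	}
	// applyTransaction has already called ep.RecordAD(gi, ad), so later txns in
	// the group can read ad.ApplicationID / ad.ConfigAsset (e.g. via `gaid`)
	// without any cow-side creatable tracking.
	_ = ad
}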
diff --git a/ledger/internal/eval_blackbox_test.go b/ledger/internal/eval_blackbox_test.go
index 90279a64b..0ac18c44c 100644
--- a/ledger/internal/eval_blackbox_test.go
+++ b/ledger/internal/eval_blackbox_test.go
@@ -620,6 +620,42 @@ func endBlock(t testing.TB, ledger *ledger.Ledger, eval *internal.BlockEvaluator
return validatedBlock
}
+// lookup gets the current account data for an address
+func lookup(t testing.TB, ledger *ledger.Ledger, addr basics.Address) basics.AccountData {
+ rnd := ledger.Latest()
+ ad, err := ledger.Lookup(rnd, addr)
+ require.NoError(t, err)
+ return ad
+}
+
+// micros gets the current microAlgo balance for an address
+func micros(t testing.TB, ledger *ledger.Ledger, addr basics.Address) uint64 {
+ return lookup(t, ledger, addr).MicroAlgos.Raw
+}
+
+// holding gets the current balance and opt-in status of an ASA for an address
+func holding(t testing.TB, ledger *ledger.Ledger, addr basics.Address, asset basics.AssetIndex) (uint64, bool) {
+ if holding, ok := lookup(t, ledger, addr).Assets[asset]; ok {
+ return holding.Amount, true
+ }
+ return 0, false
+}
+
+// asaParams gets the asset params for a given ASA index
+func asaParams(t testing.TB, ledger *ledger.Ledger, asset basics.AssetIndex) (basics.AssetParams, error) {
+ creator, ok, err := ledger.GetCreator(basics.CreatableIndex(asset), basics.AssetCreatable)
+ if err != nil {
+ return basics.AssetParams{}, err
+ }
+ if !ok {
+ return basics.AssetParams{}, fmt.Errorf("no asset (%d)", asset)
+ }
+ if params, ok := lookup(t, ledger, creator).AssetParams[asset]; ok {
+ return params, nil
+ }
+ return basics.AssetParams{}, fmt.Errorf("bad lookup (%d)", asset)
+}
+
func TestRewardsInAD(t *testing.T) {
partitiontest.PartitionTest(t)
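The helpers above turn balance and ASA assertions into one-liners in the blackbox tests. A hedged usage sketch follows; addr, asaIndex, and the expected values are placeholders, not taken from this change.

// Sketch only.
require.Equal(t, uint64(1_000_000), micros(t, l, addr)) // microAlgo balance
amount, optedIn := holding(t, l, addr, asaIndex)         // ASA balance and opt-in flag
require.True(t, optedIn)
require.Equal(t, uint64(10), amount)
params, err := asaParams(t, l, asaIndex)                 // params held on the creator account
require.NoError(t, err)
require.Equal(t, "oz", params.UnitName)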
diff --git a/ledger/internal/eval_test.go b/ledger/internal/eval_test.go
index d8e7cd5fa..9e102e02f 100644
--- a/ledger/internal/eval_test.go
+++ b/ledger/internal/eval_test.go
@@ -22,7 +22,6 @@ import (
"errors"
"fmt"
"math/rand"
- "reflect"
"testing"
"github.com/stretchr/testify/assert"
@@ -35,8 +34,8 @@ import (
"github.com/algorand/go-algorand/data/basics"
"github.com/algorand/go-algorand/data/bookkeeping"
"github.com/algorand/go-algorand/data/transactions"
+ "github.com/algorand/go-algorand/data/transactions/logic"
"github.com/algorand/go-algorand/data/transactions/verify"
- "github.com/algorand/go-algorand/data/txntest"
"github.com/algorand/go-algorand/ledger/ledgercore"
ledgertesting "github.com/algorand/go-algorand/ledger/testing"
"github.com/algorand/go-algorand/protocol"
@@ -73,103 +72,128 @@ func TestBlockEvaluatorFeeSink(t *testing.T) {
require.Equal(t, eval.specials.FeeSink, testSinkAddr)
}
-func TestPrepareEvalParams(t *testing.T) {
- partitiontest.PartitionTest(t)
+func testEvalAppGroup(t *testing.T, schema basics.StateSchema) (*BlockEvaluator, basics.Address, error) {
+ genesisInitState, addrs, keys := ledgertesting.Genesis(10)
- eval := BlockEvaluator{
- prevHeader: bookkeeping.BlockHeader{
- TimeStamp: 1234,
- Round: 2345,
- },
+ genesisBalances := bookkeeping.GenesisBalances{
+ Balances: genesisInitState.Accounts,
+ FeeSink: testSinkAddr,
+ RewardsPool: testPoolAddr,
+ Timestamp: 0,
}
+ l := newTestLedger(t, genesisBalances)
- params := []config.ConsensusParams{
- {Application: true, MaxAppProgramCost: 700},
- config.Consensus[protocol.ConsensusV29],
- config.Consensus[protocol.ConsensusFuture],
- }
+ blkHeader, err := l.BlockHdr(basics.Round(0))
+ require.NoError(t, err)
+ newBlock := bookkeeping.MakeBlock(blkHeader)
+ eval, err := l.StartEvaluator(newBlock.BlockHeader, 0, 0)
+ require.NoError(t, err)
+ eval.validate = true
+ eval.generate = false
- // Create some sample transactions
- payment := txntest.Txn{
- Type: protocol.PaymentTx,
- Sender: basics.Address{1, 2, 3, 4},
- Receiver: basics.Address{4, 3, 2, 1},
- Amount: 100,
- }.SignedTxnWithAD()
-
- appcall1 := txntest.Txn{
- Type: protocol.ApplicationCallTx,
- Sender: basics.Address{1, 2, 3, 4},
- ApplicationID: basics.AppIndex(1),
- }.SignedTxnWithAD()
-
- appcall2 := appcall1
- appcall2.SignedTxn.Txn.ApplicationCallTxnFields.ApplicationID = basics.AppIndex(2)
-
- type evalTestCase struct {
- group []transactions.SignedTxnWithAD
-
- // indicates if prepareAppEvaluators should return a non-nil
- // appTealEvaluator for the txn at index i
- expected []bool
-
- numAppCalls int
- // Used for checking transitive pointer equality in app calls
- // If there are no app calls in the group, it is set to -1
- firstAppCallIndex int
+ ops, err := logic.AssembleString(`#pragma version 2
+ txn ApplicationID
+ bz create
+ byte "caller"
+ txn Sender
+ app_global_put
+ b ok
+create:
+ byte "creator"
+ txn Sender
+ app_global_put
+ok:
+ int 1`)
+ require.NoError(t, err, ops.Errors)
+ approval := ops.Program
+ ops, err = logic.AssembleString("#pragma version 2\nint 1")
+ require.NoError(t, err)
+ clear := ops.Program
+
+ genHash := l.GenesisHash()
+ header := transactions.Header{
+ Sender: addrs[0],
+ Fee: minFee,
+ FirstValid: newBlock.Round(),
+ LastValid: newBlock.Round(),
+ GenesisHash: genHash,
+ }
+ appcall1 := transactions.Transaction{
+ Type: protocol.ApplicationCallTx,
+ Header: header,
+ ApplicationCallTxnFields: transactions.ApplicationCallTxnFields{
+ GlobalStateSchema: schema,
+ ApprovalProgram: approval,
+ ClearStateProgram: clear,
+ },
}
- // Create some groups with these transactions
- cases := []evalTestCase{
- {[]transactions.SignedTxnWithAD{payment}, []bool{false}, 0, -1},
- {[]transactions.SignedTxnWithAD{appcall1}, []bool{true}, 1, 0},
- {[]transactions.SignedTxnWithAD{payment, payment}, []bool{false, false}, 0, -1},
- {[]transactions.SignedTxnWithAD{appcall1, payment}, []bool{true, false}, 1, 0},
- {[]transactions.SignedTxnWithAD{payment, appcall1}, []bool{false, true}, 1, 1},
- {[]transactions.SignedTxnWithAD{appcall1, appcall2}, []bool{true, true}, 2, 0},
- {[]transactions.SignedTxnWithAD{appcall1, appcall2, appcall1}, []bool{true, true, true}, 3, 0},
- {[]transactions.SignedTxnWithAD{payment, appcall1, payment}, []bool{false, true, false}, 1, 1},
- {[]transactions.SignedTxnWithAD{appcall1, payment, appcall2}, []bool{true, false, true}, 2, 0},
+ appcall2 := transactions.Transaction{
+ Type: protocol.ApplicationCallTx,
+ Header: header,
+ ApplicationCallTxnFields: transactions.ApplicationCallTxnFields{
+ ApplicationID: 1,
+ },
}
- for i, param := range params {
- for j, testCase := range cases {
- t.Run(fmt.Sprintf("i=%d,j=%d", i, j), func(t *testing.T) {
- eval.proto = param
- res := eval.prepareEvalParams(testCase.group)
- require.Equal(t, len(res), len(testCase.group))
-
- // Compute the expected transaction group without ApplyData for
- // the test case
- expGroupNoAD := make([]transactions.SignedTxn, len(testCase.group))
- for k := range testCase.group {
- expGroupNoAD[k] = testCase.group[k].SignedTxn
- }
-
- // Ensure non app calls have a nil evaluator, and that non-nil
- // evaluators point to the right transactions and values
- for k, present := range testCase.expected {
- if present {
- require.NotNil(t, res[k])
- require.NotNil(t, res[k].PastSideEffects)
- require.Equal(t, res[k].GroupIndex, uint64(k))
- require.Equal(t, res[k].TxnGroup, expGroupNoAD)
- require.Equal(t, *res[k].Proto, eval.proto)
- require.Equal(t, *res[k].Txn, testCase.group[k].SignedTxn)
- require.Equal(t, res[k].MinTealVersion, res[testCase.firstAppCallIndex].MinTealVersion)
- require.Equal(t, res[k].PooledApplicationBudget, res[testCase.firstAppCallIndex].PooledApplicationBudget)
- if reflect.DeepEqual(param, config.Consensus[protocol.ConsensusV29]) {
- require.Equal(t, *res[k].PooledApplicationBudget, uint64(eval.proto.MaxAppProgramCost))
- } else if reflect.DeepEqual(param, config.Consensus[protocol.ConsensusFuture]) {
- require.Equal(t, *res[k].PooledApplicationBudget, uint64(eval.proto.MaxAppProgramCost*testCase.numAppCalls))
- }
- } else {
- require.Nil(t, res[k])
- }
- }
- })
- }
+ var group transactions.TxGroup
+ group.TxGroupHashes = []crypto.Digest{crypto.HashObj(appcall1), crypto.HashObj(appcall2)}
+ appcall1.Group = crypto.HashObj(group)
+ appcall2.Group = crypto.HashObj(group)
+ stxn1 := appcall1.Sign(keys[0])
+ stxn2 := appcall2.Sign(keys[0])
+
+ g := []transactions.SignedTxnWithAD{
+ {
+ SignedTxn: stxn1,
+ ApplyData: transactions.ApplyData{
+ EvalDelta: transactions.EvalDelta{GlobalDelta: map[string]basics.ValueDelta{
+ "creator": {Action: basics.SetBytesAction, Bytes: string(addrs[0][:])}},
+ },
+ ApplicationID: 1,
+ },
+ },
+ {
+ SignedTxn: stxn2,
+ ApplyData: transactions.ApplyData{
+ EvalDelta: transactions.EvalDelta{GlobalDelta: map[string]basics.ValueDelta{
+ "caller": {Action: basics.SetBytesAction, Bytes: string(addrs[0][:])}},
+ }},
+ },
+ }
+ txgroup := []transactions.SignedTxn{stxn1, stxn2}
+ err = eval.TestTransactionGroup(txgroup)
+ if err != nil {
+ return eval, addrs[0], err
}
+ err = eval.TransactionGroup(g)
+ return eval, addrs[0], err
+}
+
+// TestEvalAppStateCountsWithTxnGroup ensures txns in a group can't violate app state schema limits.
+// Specifically, it checks that commitToParent -> applyChild copies the child's cow state usage
+// counts into the parent, and that usage counts propagate correctly from parent cow to child cow
+// and back.
+func TestEvalAppStateCountsWithTxnGroup(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ _, _, err := testEvalAppGroup(t, basics.StateSchema{NumByteSlice: 1})
+ require.Error(t, err)
+ require.Contains(t, err.Error(), "store bytes count 2 exceeds schema bytes count 1")
+}
+
+// TestEvalAppAllocStateWithTxnGroup ensures roundCowState.deltas and applyStorageDelta
+// produce correct results when a txn group has storage allocate and storage update actions
+func TestEvalAppAllocStateWithTxnGroup(t *testing.T) {
+ partitiontest.PartitionTest(t)
+
+ eval, addr, err := testEvalAppGroup(t, basics.StateSchema{NumByteSlice: 2})
+ require.NoError(t, err)
+ deltas := eval.state.deltas()
+ ad, _ := deltas.Accts.Get(addr)
+ state := ad.AppParams[1].GlobalState
+ require.Equal(t, basics.TealValue{Type: basics.TealBytesType, Bytes: string(addr[:])}, state["caller"])
+ require.Equal(t, basics.TealValue{Type: basics.TealBytesType, Bytes: string(addr[:])}, state["creator"])
}
func TestCowCompactCert(t *testing.T) {
@@ -408,6 +432,7 @@ type evalTestLedger struct {
blocks map[basics.Round]bookkeeping.Block
roundBalances map[basics.Round]map[basics.Address]basics.AccountData
genesisHash crypto.Digest
+ genesisProto config.ConsensusParams
feeSink basics.Address
rewardsPool basics.Address
latestTotals ledgercore.AccountTotals
@@ -436,6 +461,7 @@ func newTestLedger(t testing.TB, balances bookkeeping.GenesisBalances) *evalTest
for _, acctData := range balances.Balances {
l.latestTotals.AddAccount(proto, acctData, &ot)
}
+ l.genesisProto = proto
require.False(t, genBlock.FeeSink.IsZero())
require.False(t, genBlock.RewardsPool.IsZero())
@@ -504,6 +530,11 @@ func (ledger *evalTestLedger) GenesisHash() crypto.Digest {
return ledger.genesisHash
}
+// GenesisProto returns the genesis consensus parameters for this ledger.
+func (ledger *evalTestLedger) GenesisProto() config.ConsensusParams {
+ return ledger.genesisProto
+}
+
// Latest returns the latest known block round added to the ledger.
func (ledger *evalTestLedger) Latest() basics.Round {
return basics.Round(len(ledger.blocks)).SubSaturate(1)
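Since LedgerForEvaluator now also demands GenesisProto() (see the eval.go hunk above), any ledger handed to the evaluator must expose the genesis consensus parameters; the evalTestLedger change is the minimal form of that. As a hedged illustration, a hypothetical implementation outside this test ledger would look roughly like:

// myLedger is hypothetical; only the added method is shown.
type myLedger struct {
	genesisProto config.ConsensusParams
	// ... plus everything else LedgerForEvaluator already required ...
}

func (l *myLedger) GenesisProto() config.ConsensusParams {
	return l.genesisProto
}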
diff --git a/ledger/ledger_test.go b/ledger/ledger_test.go
index b28965755..d542dff93 100644
--- a/ledger/ledger_test.go
+++ b/ledger/ledger_test.go
@@ -688,6 +688,10 @@ func TestLedgerSingleTxV24(t *testing.T) {
badTx.ApplicationID = 0
err = l.appendUnvalidatedTx(t, initAccounts, initSecrets, badTx, ad)
a.Error(err)
+ a.Contains(err.Error(), "ApprovalProgram: invalid program (empty)")
+ badTx.ApprovalProgram = []byte{242}
+ err = l.appendUnvalidatedTx(t, initAccounts, initSecrets, badTx, ad)
+ a.Error(err)
a.Contains(err.Error(), "ApprovalProgram: invalid version")
correctAppCall.ApplicationID = appIdx
diff --git a/ledger/testing/testGenesis.go b/ledger/testing/testGenesis.go
index e150fafae..13f903cfa 100644
--- a/ledger/testing/testGenesis.go
+++ b/ledger/testing/testGenesis.go
@@ -87,7 +87,6 @@ func GenesisWithProto(naccts int, proto protocol.ConsensusVersion) (ledgercore.I
blk.BlockHeader.GenesisID = "test"
blk.FeeSink = testSinkAddr
blk.RewardsPool = testPoolAddr
-
crypto.RandBytes(blk.BlockHeader.GenesisHash[:])
addrs := []basics.Address{}
diff --git a/test/e2e-go/cli/goal/expect/statefulTealCreateAppTest.exp b/test/e2e-go/cli/goal/expect/statefulTealCreateAppTest.exp
index d4118ad67..0051d65cc 100644..100755
--- a/test/e2e-go/cli/goal/expect/statefulTealCreateAppTest.exp
+++ b/test/e2e-go/cli/goal/expect/statefulTealCreateAppTest.exp
@@ -121,7 +121,7 @@ proc statefulTealTest { TEST_ALGO_DIR TEST_DATA_DIR TEAL_PROGRAM} {
spawn goal app create --creator $PRIMARY_ACCOUNT_ADDRESS --approval-prog ${TEAL_PROGS_DIR}/${TEAL_PROGRAM} --global-byteslices $GLOBAL_BYTE_SLICES --global-ints 0 --local-byteslices $LOCAL_BYTE_SLICES --local-ints 0 --app-arg "str:hello" --clear-prog ${TEAL_PROGS_DIR}/${TEAL_PROGRAM} --extra-pages 5 -w $PRIMARY_WALLET_NAME -d $TEST_PRIMARY_NODE_DIR
expect {
timeout { puts timeout; ::AlgorandGoal::Abort "\n Failed to see expected output" }
- "tx.ExtraProgramPages too large, max number of extra pages is 3" {puts "received expected error"; close}
+ "tx.ExtraProgramPages exceeds MaxExtraAppProgramPages = 3" {puts "received expected error"; close}
eof { close; ::AlgorandGoal::Abort "did not receive expected error" }
}
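The new wording in the expect test mirrors the consensus parameter being checked. A hedged sketch of the validation it reflects, using only the names that appear in the error message (the exact check lives in the transaction validation code, not in this diff):

// Sketch of the check the error message describes.
if uint64(tx.ExtraProgramPages) > proto.MaxExtraAppProgramPages {
	return fmt.Errorf("tx.ExtraProgramPages exceeds MaxExtraAppProgramPages = %d",
		proto.MaxExtraAppProgramPages)
}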
diff --git a/test/scripts/e2e_subs/app-inner-calls.py b/test/scripts/e2e_subs/app-inner-calls.py
new file mode 100755
index 000000000..a027747b2
--- /dev/null
+++ b/test/scripts/e2e_subs/app-inner-calls.py
@@ -0,0 +1,149 @@
+#!/usr/bin/env python
+
+import os
+import sys
+from goal import Goal
+import algosdk.logic as logic
+
+from datetime import datetime
+
+stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+print(f"{os.path.basename(sys.argv[0])} start {stamp}")
+
+goal = Goal(sys.argv[1], autosend=True)
+
+joe = goal.new_account()
+
+txinfo, err = goal.pay(goal.account, joe, amt=500_000)
+assert not err, err
+
+# Turn off rewards for precise balance checking
+txinfo, err = goal.keyreg(joe, nonpart=True)
+assert not err, err
+joeb = goal.balance(joe)
+
+# When invoked, this app funds the app that was created in the txn
+# before it and invokes its start(asset) method. Of course, this app must
+# be prefunded to do so, and in real life it would also want to check its
+# sender for access control.
+fund_previous = """
+#pragma version 6
+ txn ApplicationID
+ bz end
+
+ itxn_begin
+ int pay
+ itxn_field TypeEnum
+
+ txn GroupIndex
+ int 1
+ -
+ gtxns CreatedApplicationID
+ dup
+ store 0
+ app_params_get AppAddress
+ assert
+ itxn_field Receiver
+
+ int 1000000
+ itxn_field Amount
+
+ itxn_next
+
+ int appl
+ itxn_field TypeEnum
+
+ load 0
+ itxn_field ApplicationID
+
+ txn GroupIndex
+ int 2
+ -
+ gtxns CreatedAssetID
+ itxn_field Assets
+
+ method "start(asset)"
+ itxn_field ApplicationArgs
+
+ byte 0x00
+ itxn_field ApplicationArgs
+ itxn_submit
+
+
+end:
+ int 1
+"""
+
+txinfo, err = goal.app_create(joe, goal.assemble(fund_previous))
+assert not err, err
+funder = txinfo['application-index']
+assert funder
+
+# Fund the funder
+txinfo, err = goal.pay(goal.account, goal.app_address(funder), amt=4_000_000)
+assert not err, err
+
+# Construct a group that creates an ASA and an app, then "starts" the
+# new app by funding and invoking "start(asset)" on it. Inside the new
+# app's start() method, there will be yet another inner transaction:
+# it opts into the supplied asset.
+
+goal.autosend = False
+create_asa = goal.asset_create(joe, total=10_000, unit_name="oz", asset_name="Gold")
+app_teal = """
+#pragma version 6
+ txn ApplicationID
+ bz end
+ txn ApplicationArgs 0
+ method "start(asset)"
+ ==
+ bz next0
+
+ itxn_begin
+
+ int axfer
+ itxn_field TypeEnum
+
+ txn ApplicationArgs 1
+ btoi
+ txnas Assets
+ itxn_field XferAsset
+
+ global CurrentApplicationAddress
+ itxn_field AssetReceiver
+
+ itxn_submit
+
+next0:
+
+end:
+ int 1
+"""
+create_app = goal.app_create(joe, goal.assemble(app_teal))
+start_app = goal.app_call(joe, funder)
+
+[asa_info, app_info, start_info], err = goal.send_group([create_asa, create_app, start_app])
+assert not err, err
+
+goal.autosend = True
+
+import json
+
+asa_id = asa_info['asset-index']
+app_id = app_info['application-index']
+assert asa_id+1 == app_id
+app_account = logic.get_application_address(app_id)
+
+# Check balance on app account is right (1m - 1 optin fee)
+assert 1_000_000-1000 == goal.balance(app_account), goal.balance(app_account)
+assert 0 == goal.balance(app_account, asa_id)
+# Check min-balance on app account is right (base + 1 asa)
+assert 200_000 == goal.min_balance(app_account), goal.min_balance(app_account)
+
+# Ensure creator can send asa to app
+txinfo, err = goal.axfer(joe, app_account, 10, asa_id)
+assert not err, err
+assert 10 == goal.balance(app_account, asa_id)
+
+
+print(f"{os.path.basename(sys.argv[0])} OK {stamp}")
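The same create-ASA / create-app / start-app group can also be expressed with the Go test helpers introduced earlier in this diff; the sketch below is illustrative only (appTeal and funderID stand in for the assembled approval program and the already-created funder app).

// Sketch using the Go blackbox helpers; not part of the change.
createASA := txntest.Txn{
	Type:        "acfg",
	Sender:      addrs[0],
	AssetParams: basics.AssetParams{Total: 10_000, UnitName: "oz", AssetName: "Gold"},
}
createApp := txntest.Txn{Type: "appl", Sender: addrs[0], ApprovalProgram: appTeal}
startApp := txntest.Txn{Type: "appl", Sender: addrs[0], ApplicationID: funderID}

eval := nextBlock(t, l, true, nil)
err := txgroup(t, l, eval, &createASA, &createApp, &startApp)
require.NoError(t, err)
endBlock(t, l, eval)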
diff --git a/test/scripts/e2e_subs/goal/goal.py b/test/scripts/e2e_subs/goal/goal.py
index 921d2c3ea..d21be52ad 100755
--- a/test/scripts/e2e_subs/goal/goal.py
+++ b/test/scripts/e2e_subs/goal/goal.py
@@ -156,12 +156,12 @@ class Goal:
return txid, ""
return self.confirm(txid), ""
except algosdk.error.AlgodHTTPError as e:
- return (None, str(e))
+ return (None, e)
def send_group(self, txns, confirm=True):
# Need unsigned transactions to calculate the group. This pulls
- # out the unsigned tx if tx is sigged, logigsigged or
- # multisgged
+ # out the unsigned tx if tx is sigged, logicsigged or
+ # multisigged
utxns = [
tx if isinstance(tx, txn.Transaction) else tx.transaction
for tx in txns
@@ -172,14 +172,15 @@ class Goal:
tx.group = gid
else:
tx.transaction.group = gid
+ txids = [utxn.get_txid() for utxn in utxns]
try:
stxns = [self.sign(tx) for tx in txns]
- txid = self.algod.send_transactions(stxns)
+ self.algod.send_transactions(stxns)
if not confirm:
- return txid, ""
- return self.confirm(txid), ""
+ return txids, None
+ return [self.confirm(txid) for txid in txids], None
except algosdk.error.AlgodHTTPError as e:
- return (None, str(e))
+ return (txids, e)
def status(self):
return self.algod.status()
@@ -196,7 +197,7 @@ class Goal:
def wait_for_block(self, block):
"""
- Utility function to wait until the block number given has been confirmed
+ Utility function to wait until the given block has been confirmed
"""
print(f"Waiting for block {block}.")
s = self.algod.status()
@@ -322,6 +323,10 @@ class Goal:
info = self.algod.account_info(account)
return info["amount"]
+ def min_balance(self, account):
+ info = self.algod.account_info(account)
+ return info["min-balance"]
+
def holding(self, account, asa):
info = self.algod.account_info(account)
for asset in info["assets"]: