path: root/big-step-exercise.ipynb
authorJack Sangdahl <[email protected]>2024-12-12 21:02:58 -0700
committerJack Sangdahl <[email protected]>2024-12-12 21:02:58 -0700
commit08f9cae13d0dce99d40a6d93432a84cb075bebfc (patch)
tree5714d73c616b511d8fa5ea8a0ef1d8405f13ce90 /big-step-exercise.ipynb
parent0d13c649bdabafde59233609be7bd88e273c7ae6 (diff)
Unit 3
Signed-off-by: Jack Sangdahl <[email protected]>
Diffstat (limited to 'big-step-exercise.ipynb')
-rw-r--r--  big-step-exercise.ipynb  2342
1 files changed, 2342 insertions, 0 deletions
diff --git a/big-step-exercise.ipynb b/big-step-exercise.ipynb
new file mode 100644
index 0000000..d0e415d
--- /dev/null
+++ b/big-step-exercise.ipynb
@@ -0,0 +1,2342 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "bd949a01",
+ "metadata": {},
+ "source": [
+ "Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel $\\rightarrow$ Restart) and then **run all cells** (in the menubar, select Cell $\\rightarrow$ Run All).\n",
+ "\n",
+ "Make sure you fill in any place that says `???`, `YOUR CODE HERE`, \"???\", \"YOUR ANSWER HERE\", as well as your name and collaborators below:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "f45214a1",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "\u001b[36mNAME\u001b[39m: \u001b[32mString\u001b[39m = \u001b[32m\"Jack Sangdahl\"\u001b[39m\n",
+ "\u001b[36mCOLLABORATORS\u001b[39m: \u001b[32mString\u001b[39m = \u001b[32m\"N/A\"\u001b[39m"
+ ]
+ },
+ "execution_count": 1,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "val NAME = \"Jack Sangdahl\"\n",
+ "val COLLABORATORS = \"N/A\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8d26413f",
+ "metadata": {},
+ "source": [
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "efa5d5cf-c495-440c-8f26-17e20cad2a3f",
+ "metadata": {},
+ "source": [
+ "$\\newcommand{\\TirName}[1]{\\text{\\small #1}}\n",
+ "\\newcommand{\\inferrule}[3][]{\n",
+ " \\let\\and\\qquad\n",
+ " \\begin{array}{@{}l@{}}\n",
+ " \\TirName{#1}\n",
+ " \\\\\n",
+ " \\displaystyle\n",
+ " \\frac{#2}{#3}\n",
+ " \\end{array}\n",
+ "}\n",
+ "\\newcommand{\\infer}[3][]{\\inferrule[#1]{#2}{#3}}\n",
+ "$\n",
+ "\n",
+ "# Exercise: Big-Step Operational Semantics\n",
+ "\n",
+ "<!-- 3 Expressions -->\n",
+ "\n",
+ "<!-- 4 Binding and Scope -->\n",
+ "\n",
+ "<!-- 8 Recursion -->\n",
+ "\n",
+ "<!-- 9 Inductive Data Types -->\n",
+ "\n",
+ "<!-- 11 Concrete Syntax -->\n",
+ "\n",
+ "<!-- 12 Abstract Syntax and Parsing -->\n",
+ "\n",
+ "<!-- 13 Exercise: Syntax -->\n",
+ "\n",
+ "<!-- 14 Static Scoping -->\n",
+ "\n",
+ "<!-- 15 Judgments -->\n",
+ "\n",
+ "<!-- 16 Variables, Basic Values, and Judgments Lab -->\n",
+ "\n",
+ "<!-- 17 Operational Semantics -->\n",
+ "\n",
+ "<!-- 18 Functions and Dynamic Scoping -->\n",
+ "\n",
+ "<!-- 19 Big-Step Exercise -->\n",
+ "\n",
+ "### Learning Goals\n",
+ "\n",
+ "The primary learning goals of this assignment are to build intuition for\n",
+ "the following:\n",
+ "\n",
+ "- how to read a formal specification of a language semantics;\n",
+ "- how dynamic scoping arises; and\n",
+ "- big-step interpretation.\n",
+ "\n",
+ "### Instructions\n",
+ "\n",
+ "This assignment asks you to write Scala code. There are restrictions\n",
+ "associated with how you can solve these problems. Please pay careful\n",
+ "heed to those. If you are unsure, ask the course staff.\n",
+ "\n",
+ "Note that `???` indicates that there is a missing function or code\n",
+ "fragment that needs to be filled in. Make sure that you remove the `???`\n",
+ "and replace it with the answer.\n",
+ "\n",
+ "Use the test cases provided to test your implementations. You are also\n",
+ "encouraged to write your own test cases to help debug your work.\n",
+ "However, please delete any extra cells you may have created lest they\n",
+ "break an autograder.\n",
+ "\n",
+ "### Imports"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "88052e49",
+ "metadata": {
+ "deletable": false,
+ "editable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "afc1ca0eb6ddf4e8f942a9b59b4d2e8e",
+ "grade": false,
+ "grade_id": "testing",
+ "locked": true,
+ "schema_version": 3,
+ "solution": false,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "\u001b[32mimport \u001b[39m\u001b[36mRun this cell FIRST before testing.\n",
+ "import $ivy.$ , org.scalatest._, events._, flatspec._\n",
+ "\u001b[39m\n",
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mreport\u001b[39m\n",
+ "defined \u001b[32mfunction\u001b[39m \u001b[36massertPassed\u001b[39m\n",
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mpassed\u001b[39m\n",
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mtest\u001b[39m"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "// Run this cell FIRST before testing.\n",
+ "import $ivy.`org.scalatest::scalatest:3.2.19`, org.scalatest._, events._, flatspec._\n",
+ "def report(suite: Suite): Unit = suite.execute(stats = true)\n",
+ "def assertPassed(suite: Suite): Unit =\n",
+ " suite.run(None, Args(new Reporter {\n",
+ " def apply(e: Event) = e match {\n",
+ " case e @ (_: TestFailed) => assert(false, s\"${e.message} (${e.testName})\")\n",
+ " case _ => ()\n",
+ " }\n",
+ " }))\n",
+ "def passed(points: Int): Unit = {\n",
+ " require(points >=0)\n",
+ " if (points == 1) println(\"*** 🎉 Tests Passed (1 point) ***\")\n",
+ " else println(s\"*** 🎉 Tests Passed ($points points) ***\")\n",
+ "}\n",
+ "def test(suite: Suite, points: Int): Unit = {\n",
+ " report(suite)\n",
+ " assertPassed(suite)\n",
+ " passed(points)\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "35a9dcee-4c64-4cdb-8ea2-aaca684dda00",
+ "metadata": {},
+ "source": [
+ "## A Big-Step Javascripty Interpreter\n",
+ "\n",
+ "We now have the formal tools to specify exactly how a JavaScripty\n",
+ "program should behave. Unless otherwise specified, we continue to try to\n",
+ "match JavaScript semantics, though we are no longer beholden to it.\n",
+ "Thus, it is still useful to write little test JavaScript programs and\n",
+ "see how the test should behave.\n",
+ "\n",
+ "In this exercise, we extend JavaScripty with functions. We try to\n",
+ "implement the `eval` function in the most straightforward way. What we\n",
+ "will discover is that we have made a historical mistake and have ended\n",
+ "up with a form of *dynamic scoping*.\n",
+ "\n",
+ "For the purpose of this exercise, we will limit the scope of JavaScripty\n",
+ "by restricting expression forms and simplifying semantics as appropriate\n",
+ "for pedagogical purposes. In particular, we simplify the semantics by no\n",
+ "longer performing implicit type coercions.\n",
+ "\n",
+ "### Syntax\n",
+ "\n",
+ "We consider the following *abstract syntax* for this exercise. Note that\n",
+ "new constructs for functions are $\\color{red}\\text{highlighted}$.\n",
+ "\n",
+ "$$\n",
+ "\\begin{array}{rrrl}\n",
+ " \\text{expressions} & e& \\mathrel{::=}&\n",
+ " n\n",
+ " \\mid b\n",
+ " \\mid e_1 \\mathbin{\\mathit{bop}}e_2\n",
+ " \\mid e_1\\;\\texttt{?}\\;e_2\\;\\texttt{:}\\;e_3\n",
+ " \\mid x\n",
+ " \\mid\\mathbf{const}\\;x\\;\\texttt{=}\\;e_1\\texttt{;}\\;e_2 \\\\ \n",
+ " && \\color{red}\\mid& \\color{red}\n",
+ " \\texttt{(}x\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e_1\n",
+ " \\mid e_1\\texttt{(}e_2\\texttt{)}\n",
+ " \\\\\n",
+ " \\text{values} & v& \\mathrel{::=}&\n",
+ " n\n",
+ " \\mid b\n",
+ " \\color{red}\n",
+ " \\mid\\texttt{(}x\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e_1\n",
+ " \\\\\n",
+ " \\text{binary operators} & \\mathit{bop}& \\mathrel{::=}&\n",
+ " \\texttt{+}\n",
+ " \\mid\\texttt{===}\n",
+ " \\mid\\texttt{!==}\n",
+ " \\\\\n",
+ " \\text{variables} & x\n",
+ " \\\\\n",
+ " \\text{numbers} & n\n",
+ " \\\\\n",
+ " \\text{booleans} & b\n",
+ "\\end{array}\n",
+ "$$\n",
+ "\n",
+ "Observe that we consider only base values numbers $n$ and booleans $b$\n",
+ "and have significantly reduced the number of expression forms we\n",
+ "consider.\n",
+ "\n",
+ "Like in the book chapter, all functions are one argument functions for\n",
+ "simplicity.\n",
+ "\n",
+ "## Dynamic Scoping Test\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 1 (5 points)**</span> Write a\n",
+ "JavaScript program that behaves differently under dynamic scoping versus\n",
+ "static scoping (and does not crash). This will get us used to the\n",
+ "syntax, while providing a crucial test case for our interpreter.\n",
+ "\n",
+ "**Edit this cell:**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "70a3be6f-d318-40af-b9e6-b1dd9638319c",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "61ecbdea151ec78eedf98d1336ed1dc2",
+ "grade": true,
+ "grade_id": "dynamic-scoping-test",
+ "locked": false,
+ "points": 5,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "cmd20.sc:1: not found: value const\n",
+ "val res20_0 = const x = 10;\n",
+ " ^\n",
+ "cmd20.sc:2: not found: value const\n",
+ "val res20_1 = const f = (a) => {\n",
+ " ^\n",
+ "cmd20.sc:6: not found: value const\n",
+ "val res20_2 = const g = (b) => {\n",
+ " ^\n",
+ "cmd20.sc:10: not found: value console\n",
+ "val res20_3 = console.log(f(-1))\n",
+ " ^\n",
+ "cmd20.sc:10: not found: value f\n",
+ "val res20_3 = console.log(f(-1))\n",
+ " ^\n",
+ "cmd20.sc:1: postfix operator x needs to be enabled\n",
+ "by making the implicit value scala.language.postfixOps visible.\n",
+ "This can be achieved by adding the import clause 'import scala.language.postfixOps'\n",
+ "or by setting the compiler option -language:postfixOps.\n",
+ "See the Scaladoc for value scala.language.postfixOps for a discussion\n",
+ "why the feature needs to be explicitly enabled.\n",
+ "val res20_0 = const x = 10;\n",
+ " ^\n",
+ "cmd20.sc:2: postfix operator f needs to be enabled\n",
+ "by making the implicit value scala.language.postfixOps visible.\n",
+ "val res20_1 = const f = (a) => {\n",
+ " ^\n",
+ "cmd20.sc:6: postfix operator g needs to be enabled\n",
+ "by making the implicit value scala.language.postfixOps visible.\n",
+ "val res20_2 = const g = (b) => {\n",
+ " ^\n",
+ "Compilation Failed"
+ ]
+ }
+ ],
+ "source": [
+ "const x = 10;\n",
+ "const f = (a) => {\n",
+ " return x + a;\n",
+ "}\n",
+ "\n",
+ "const g = (b) => {\n",
+ " const x = 20;\n",
+ " return f(b);\n",
+ "}\n",
+ "console.log(f(-1))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1bd2d1c9-5101-4b29-8090-40482c978974",
+ "metadata": {},
+ "source": [
+ "Explain in 1-2 sentences why you think this program would behave\n",
+ "differently under dynamic scoping versus static scoping.\n",
+ "\n",
+ "**Edit this cell:**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cd4d28fb-9209-4946-b934-9d4d089d2469",
+ "metadata": {},
+ "source": [
+ "With **static scoping**, when `f` is called from inside `g`, the value of `x` inside `f` is based on where `f` is *defined*. Since here, `f` is defined in the global scope, which is 10, calling `f(-1)` will evaluate to $10 + (-1) = 9$. With **dynamic scoping**, the value of `x` will be based on the current context of the program. In `g`, the value of `x` is 20, which is the value that will be used in `f`. Meaning `f(-1)` will evaluate to $20 + (-1) = 19$."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "178c4974-726e-4a67-a922-8b658644f327",
+ "metadata": {},
+ "source": [
+ "#### Notes\n",
+ "\n",
+ "- We are using $\\mathbf{const}$ to *name* functions, that is, we are\n",
+ " binding an expression, which is a function, to a variable. This\n",
+ " binding allows us to get it later, but it *does not* allow us call\n",
+ " it inside the function definition (i.e., recursion).\n",
+ "- We are providing a throw-away parameter to our function because\n",
+ " according to our syntax functions have exactly one parameter.\n",
+ "- As noted above, we are simplifying some semantics in this exercise\n",
+ " compared with the previous lab: implicit type coercions work in\n",
+ " JavaScript and in the previous lab, but you will not include them in\n",
+ " your implementation on this homework. Therefore, *your test case\n",
+ " cannot have any implicit type conversions*.\n",
+ "- In order to execute the program, you will need to switch your kernal\n",
+ " to Deno, the Javascript kernel for Jupyter.\n",
+ "\n",
+ "## Reading an Operational Semantics\n",
+ "\n",
+ "In this homework, we start to see specifications of programming language\n",
+ "semantics. A big-step operational semantics of this small fragment of\n",
+ "JavaScripty is given below. Except perhaps for the assigned reading,\n",
+ "this figure\n",
+ "(<a href=\"#fig-javascripty-numbers-booleans-variables-limited\"\n",
+ "class=\"quarto-xref\">Figure 1</a>) may be one of the first times that you\n",
+ "are reading a formal semantics of a programming language. It may seem\n",
+ "daunting at first, but it will be become easier with practice. This\n",
+ "homework is such an opportunity to practice.\n",
+ "\n",
+ "$\\fbox{$E \\vdash e \\Downarrow v$}$\n",
+ "\n",
+ "$\\inferrule[EvalVar]{\n",
+ " \\phantom{\\Downarrow}\n",
+ "}{\n",
+ " E \\vdash x \\Downarrow E(x) \n",
+ "}$\n",
+ "\n",
+ "$\\inferrule[EvalConstDecl]{\n",
+ " E \\vdash e_1 \\Downarrow v_1 \n",
+ " \\and\n",
+ " E[\\mathop{}x \\mapsto v_1] \\vdash e_2 \\Downarrow v_2 \n",
+ "}{\n",
+ " E \\vdash \n",
+ " \\mathbf{const}\\;x\\;\\texttt{=}\\;e_1\\texttt{;}\\;e_2\n",
+ " \\Downarrow v_2 \n",
+ "}$\n",
+ "\n",
+ "$\\inferrule[EvalVal]{\n",
+ " \\phantom{ \\Downarrow}\n",
+ "}{\n",
+ " E \\vdash v \\Downarrow v\n",
+ "}$\n",
+ "\n",
+ "$\\inferrule[EvalPlusNumber]{\n",
+ " E \\vdash e_1 \\Downarrow n_1 \n",
+ " \\and\n",
+ " E \\vdash e_2 \\Downarrow n_2 \n",
+ "}{\n",
+ " E \\vdash \n",
+ " e_1 \\mathbin{\\texttt{+}} e_2\n",
+ " \\Downarrow n_1 \\mathbin{+} n_2 \n",
+ "}$\n",
+ "\n",
+ "$\\inferrule[EvalEquality]{\n",
+ " E \\vdash e_1 \\Downarrow v_1 \n",
+ " \\and\n",
+ " E \\vdash e_2 \\Downarrow v_2 \n",
+ " \\and\n",
+ " \\mathit{bop}\\in \\left\\{ \\texttt{===}, \\texttt{!==} \\right\\}\n",
+ "}{\n",
+ " E \\vdash \n",
+ " e_1 \\mathbin{\\mathit{bop}} e_2\n",
+ " \\Downarrow v_1 \\mathbin{\\mathit{bop}} v_2 \n",
+ "}$\n",
+ "\n",
+ "$\\inferrule[EvalIfTrue]{\n",
+ " E \\vdash e_1 \\Downarrow \\mathbf{true}\n",
+ " \\and\n",
+ " E \\vdash e_2 \\Downarrow v_2 \n",
+ "}{\n",
+ " E \\vdash \n",
+ " e_1\\;\\texttt{?}\\;e_2\\;\\texttt{:}\\;e_3\n",
+ " \\Downarrow v_2 \n",
+ "}$\n",
+ "\n",
+ "$\\inferrule[EvalIfFalse]{\n",
+ " E \\vdash e_1 \\Downarrow \\mathbf{false}\n",
+ " \\and\n",
+ " E \\vdash e_3 \\Downarrow v_3 \n",
+ "}{\n",
+ " E \\vdash \n",
+ " e_1\\;\\texttt{?}\\;e_2\\;\\texttt{:}\\;e_3\n",
+ " \\Downarrow v_3 \n",
+ "}$\n",
+ "\n",
+ "Figure 1: A big-step operational semantics of a fragment of JavaScripty\n",
+ "with some arithmetic and logic expressions, as well as variable binding.\n",
+ "We define the judgment form $E \\vdash e \\Downarrow v$, which says\n",
+ "informally, “In value environment $E$, expression $e$ evaluates to value\n",
+ "$v$.” This relation has three parameters: $E$, $e$, and $v$. You can see\n",
+ "the other parts of the judgment form as simply punctuation.\n",
+ "\n",
+ "A value environment $E$ is a finite map from variables $x$ to values $v$\n",
+ "that we write as follows: $$\n",
+ "\\begin{array}{rrrl}\n",
+ " \\text{value environments} & E, \\mathit{env}& \\mathrel{::=}& \\cdot \\mid E[\\mathop{}x \\mapsto v]\n",
+ "\\end{array}\n",
+ "$$\n",
+ "\n",
+ "We write $\\cdot$ for the empty environment and $E[\\mathop{}x \\mapsto v]$\n",
+ "as the environment that maps $x$ to $v$ but is otherwise the same as $E$\n",
+ "(i.e., extends $E$ with mapping $x$ to $v$). Additionally, we write\n",
+ "$E(x)$ for looking up the value of $x$ in environment $E$.\n",
+ "\n",
+ "A formal semantics enables us to describe the semantics of a programming\n",
+ "language clearly and concisely. The initial barrier is getting used to\n",
+ "the meta-language of judgment forms and inference rules. However, once\n",
+ "you cross that barrier, you will see that we are telling you exactly how\n",
+ "to implement the interpreter—it will almost feel like cheating!\n",
+ "\n",
+ "### Strings\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 2 (5 points)**</span> Suppose\n",
+ "that we extend the above language with strings $\\mathit{str}$ and a\n",
+ "string concatenation $e_1 \\mathbin{\\texttt{+}} e_2$ expression (like in\n",
+ "JavaScript). Consider the following inference rule for the evaluation\n",
+ "judgment form:\n",
+ "\n",
+ "$\\inferrule[EvalPlusString1]{\n",
+ " E \\vdash e_1 \\Downarrow \\mathit{str}_1 \n",
+ " \\and\n",
+ " E \\vdash e_2 \\Downarrow v_2 \n",
+ " \\and\n",
+ " v_2 \\rightsquigarrow \\mathit{str}_2\n",
+ "}{\n",
+ " E \\vdash \n",
+ " e_1 \\mathbin{\\texttt{+}} e_2\n",
+ " \\Downarrow \\mathit{str}_1 \\mathit{str}_2 \n",
+ "}$\n",
+ "\n",
+ "Explain in 1-2 sentences what $\\TirName{EvalPlusString1}$ is stating.\n",
+ "\n",
+ "***Edit this cell:***"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "779388e9-f998-4212-976b-7251afdf1d86",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "markdown",
+ "checksum": "533a1dfacd45dd6e555b71fee3c790b4",
+ "grade": true,
+ "grade_id": "evalplusstring1",
+ "locked": false,
+ "points": 5,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "source": [
+ "$\\TirName{EvalPlusString1}$ states that if an expression $e_1$ evaluates to a string $str_1$, and another expression $e_2$ evaluates to a value $v_2$ that can be coerced into a string $str_2$, then the combined expression of $e_1 + e_2$ evalutes to the concated string of $str_1\\:str_2$."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e03fe09c-8deb-4eb0-a399-c28b25955d63",
+ "metadata": {},
+ "source": [
+ "#### Notes\n",
+ "\n",
+ "- The $v \\rightsquigarrow \\mathit{str}$ judgment form says that value\n",
+ " $v$ coerces to string $\\mathit{str}$.\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 3 (5 points)**</span> Let us\n",
+ "define rules that specify evaluation of the expression\n",
+ "$e_1 \\mathbin{\\texttt{+}} e_2$ just like in JavaScript. Give the other\n",
+ "rule $\\TirName{EvalPlusString\\(_2\\)}$ that concatenates strings in the\n",
+ "case that $e_2$ evaluates to a string.\n",
+ "\n",
+ "***Edit this cell:***"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "497dd69b-2a86-428c-8f3d-4529239a9458",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "markdown",
+ "checksum": "b9a3806decfd219a093fa6b2226c5967",
+ "grade": true,
+ "grade_id": "evalplusstring2",
+ "locked": false,
+ "points": 5,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "source": [
+ "$\\inferrule[EvalPlusString1]{\n",
+ " E\\vdash e_1 \\Downarrow v_1\n",
+ " \\and\n",
+ " v_1 \\rightsquigarrow \\mathit{str}_1\n",
+ " \\and\n",
+ " E \\vdash e_2 \\Downarrow \\mathit{str}_2\n",
+ "} {\n",
+ " E \\vdash\n",
+ " e_1 \\mathbin{\\texttt{+}} e_2\n",
+ " \\Downarrow \\mathit{str}_1 \\mathit{str}_2\n",
+ "}$"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "164e0114-96fb-4f98-9e5e-7072d9a18d22",
+ "metadata": {},
+ "source": [
+ "Explain in 1-2 sentences why you need $\\TirName{EvalPlusString\\(1\\)}$\n",
+ "and $\\TirName{EvalPlusString\\(2\\)}$ together for interpreting string\n",
+ "concatenation like in JavaScript.\n",
+ "\n",
+ "**Edit this cell:**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "289081c1-65cf-4d54-a523-cbc77103f62f",
+ "metadata": {},
+ "source": [
+ "Both $EvalPlusSTring1$ and $EvalPlusString2$ are required to handle different cases of string concatenation. $EvalPlusString1$ handles the cases where $e_1$ is a string and $e_2$ is coerced into one string, while $EvalPlusString2$ handles when both $e_1$ and $e_2$ are already a string."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3a977bdd-a125-4411-ab29-ae1f94fd8db5",
+ "metadata": {},
+ "source": [
+ "#### Notes\n",
+ "\n",
+ "You may give the rule in LaTeX math or as plain text (ascii art)\n",
+ "approximating the math rendering. For example,\n",
+ "\n",
+ " EvalPlusString1\n",
+ " E |- e1 vv str1 E |- e2 vv v2 v2 ~~> str2\n",
+ " -----------------------------------------------\n",
+ " E |- e1 + e2 vv str1 str2\n",
+ "\n",
+ "The LaTeX code for the rendered $\\TirName{EvalPlusString1}$ rule above\n",
+ "is as follows:\n",
+ "\n",
+ "``` latex\n",
+ "\\inferrule[EvalPlusString1]{\n",
+ " E \\vdash e_1 \\Downarrow \\mathit{str}_1\n",
+ " \\and\n",
+ " E \\vdash e_2 \\Downarrow v_2\n",
+ " \\and\n",
+ " v_2 \\rightsquigarrow \\mathit{str}_2\n",
+ "}{\n",
+ " E \\vdash e_1 \\mathbin{\\texttt{+}} e_2 \\Downarrow \\mathit{str}_1 \\mathit{str}_2\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "### Functions\n",
+ "\n",
+ "The inference rule defining evaluation of a function call (that\n",
+ "accidentally results in dynamic scoping) is as follows:\n",
+ "\n",
+ "$\\inferrule[EvalCall]{\n",
+ " E \\vdash e_1 \\Downarrow \\texttt{(}x\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e' \n",
+ " \\and\n",
+ " E \\vdash e_2 \\Downarrow v_2 \n",
+ " \\and\n",
+ " E[\\mathop{}x \\mapsto v_2] \\vdash e' \\Downarrow v' \n",
+ "}{\n",
+ " E \\vdash \n",
+ " e_1\\texttt{(}e_2\\texttt{)}\n",
+ " \\Downarrow v' \n",
+ "}$\n",
+ "\n",
+ "Figure 2\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 4 (5 points)**</span> To continue\n",
+ "this warm up and guide our implementation of these inference rules,\n",
+ "write out what $\\TirName{EvalCall}$ is stating.\n",
+ "\n",
+ "***Edit this cell:***"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6127815b-8e44-4a33-bf84-56eae852758b",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "markdown",
+ "checksum": "b72bfbe88815393d455a52eb4aeeedfb",
+ "grade": true,
+ "grade_id": "3_3",
+ "locked": false,
+ "points": 5,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "source": [
+ "$\\TirName{EvalCall}$ shows how to evaluate a function $e_1$, where $e_2$ is an argument passed to the function. First, we evaluate $e_2$ to a value $v_2$, and substitute the function's parameter $x$ with $v_2$ in the function $e_1$."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "32627385-8210-4977-9b4c-ffd188150e6b",
+ "metadata": {},
+ "source": [
+ "## Implementing from Inference Rules\n",
+ "\n",
+ "### Abstract Syntax\n",
+ "\n",
+ "In the following, we build up to implementing an `eval` function:\n",
+ "\n",
+ "``` scala\n",
+ "def eval(env: Env, e: Expr): Expr\n",
+ "```\n",
+ "\n",
+ "This `eval` function directly corresponds the the evaluation judgment:\n",
+ "$E \\vdash e \\Downarrow v$, which is the operational semantics defined\n",
+ "above. It takes as input a value environment $E$ and an expression $e$\n",
+ "and returns a value $v$.\n",
+ "\n",
+ "Below is the `Expr` type defining our abstract syntax tree in Scala. If\n",
+ "you haven’t already, switch back to the Scala kernel and then run the\n",
+ "two cells below."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "29c86c42",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mtrait\u001b[39m \u001b[36mExpr\u001b[39m\n",
+ "defined \u001b[32mclass\u001b[39m \u001b[36mVar\u001b[39m\n",
+ "defined \u001b[32mclass\u001b[39m \u001b[36mConstDecl\u001b[39m\n",
+ "defined \u001b[32mclass\u001b[39m \u001b[36mN\u001b[39m\n",
+ "defined \u001b[32mclass\u001b[39m \u001b[36mB\u001b[39m\n",
+ "defined \u001b[32mtrait\u001b[39m \u001b[36mBop\u001b[39m\n",
+ "defined \u001b[32mclass\u001b[39m \u001b[36mBinary\u001b[39m\n",
+ "defined \u001b[32mobject\u001b[39m \u001b[36mPlus\u001b[39m\n",
+ "defined \u001b[32mobject\u001b[39m \u001b[36mEq\u001b[39m\n",
+ "defined \u001b[32mobject\u001b[39m \u001b[36mNe\u001b[39m\n",
+ "defined \u001b[32mclass\u001b[39m \u001b[36mIf\u001b[39m\n",
+ "defined \u001b[32mclass\u001b[39m \u001b[36mFun\u001b[39m\n",
+ "defined \u001b[32mclass\u001b[39m \u001b[36mCall\u001b[39m"
+ ]
+ },
+ "execution_count": 3,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "trait Expr // e ::=\n",
+ "\n",
+ "case class Var(x: String) extends Expr // e ::= x\n",
+ "case class ConstDecl(x: String, e1: Expr, e2: Expr) extends Expr // e ::= const x = e1; e2\n",
+ "\n",
+ "case class N(n: Double) extends Expr // e ::= n\n",
+ "case class B(b: Boolean) extends Expr // e ::= b\n",
+ "\n",
+ "trait Bop // bop ::=\n",
+ "case class Binary(bop: Bop, e1: Expr, e2: Expr) extends Expr // e ::= e1 bop b2\n",
+ "\n",
+ "case object Plus extends Bop // bop ::= +\n",
+ "case object Eq extends Bop // bop ::= ===\n",
+ "case object Ne extends Bop // bop ::= !==\n",
+ "\n",
+ "case class If(e1: Expr, e2: Expr, e3: Expr) extends Expr // e ::= e1 ? e2 : e3\n",
+ "\n",
+ "case class Fun(x: String, e1: Expr) extends Expr // e ::= (x) => e1\n",
+ "case class Call(e1: Expr, e2: Expr) extends Expr // e ::= e1(e2)"
+ ]
+ },
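+ {
+ "cell_type": "markdown",
+ "id": "expr-ast-example-sketch",
+ "metadata": {},
+ "source": [
+ "As a small illustrative sketch (not part of the assignment), the concrete program `const y = 1; y + 2` corresponds to the following abstract syntax tree of type `Expr`:\n",
+ "\n",
+ "``` scala\n",
+ "// Hypothetical example: abstract syntax for `const y = 1; y + 2`\n",
+ "ConstDecl(\"y\", N(1), Binary(Plus, Var(\"y\"), N(2)))\n",
+ "```"
+ ]
+ },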
+ {
+ "cell_type": "markdown",
+ "id": "c893fd56-f5f0-4023-b032-3b1edf5af9d2",
+ "metadata": {},
+ "source": [
+ "Numbers $n$, booleans $b$, and functions\n",
+ "$\\texttt{(}x\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e_1$ are values,\n",
+ "and we represent a value environment $E$ as a `Map[String, Expr]`:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "7ad1df6c",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36misValue\u001b[39m\n",
+ "defined \u001b[32mtype\u001b[39m \u001b[36mEnv\u001b[39m\n",
+ "\u001b[36mempty\u001b[39m: \u001b[32mEnv\u001b[39m = \u001b[33mMap\u001b[39m()\n",
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mlookup\u001b[39m\n",
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mextend\u001b[39m"
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def isValue(e: Expr): Boolean = e match {\n",
+ " case N(_) | B(_) | Fun(_, _) => true\n",
+ " case _ => false\n",
+ "}\n",
+ "\n",
+ "type Env = Map[String, Expr]\n",
+ "val empty: Env = Map()\n",
+ "def lookup(env: Env, x: String): Expr = env(x)\n",
+ "def extend(env: Env, x: String, v: Expr): Env = {\n",
+ " require(isValue(v))\n",
+ " env + (x -> v)\n",
+ "}"
+ ]
+ },
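+ {
+ "cell_type": "markdown",
+ "id": "env-helpers-usage-sketch",
+ "metadata": {},
+ "source": [
+ "A minimal usage sketch of the helpers above (the name `env1` is only for illustration): `extend` adds a binding and `lookup` reads it back.\n",
+ "\n",
+ "``` scala\n",
+ "// Sketch: build an environment with y bound to 1, then look y up\n",
+ "val env1: Env = extend(empty, \"y\", N(1)) // Map(\"y\" -> N(1.0))\n",
+ "lookup(env1, \"y\") // returns N(1.0)\n",
+ "```"
+ ]
+ },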
+ {
+ "cell_type": "markdown",
+ "id": "09e8e0df-3209-49f6-85d0-03bae67db850",
+ "metadata": {},
+ "source": [
+ "<span class=\"theorem-title\">**Exercise 5 (5 points)**</span> Now that we\n",
+ "have the AST type `Expr` defined, take your JavaScripty test program\n",
+ "from\n",
+ "<a href=\"#exr-dynamic-scoping-test\" class=\"quarto-xref\">Exercise 1</a>\n",
+ "and write out the AST it would parse to. This will serve as a test for\n",
+ "your implementation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "4499a45e",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "d6ce40f7aa1f7978936bcf6b0553fc21",
+ "grade": false,
+ "grade_id": "dynamic-scoping-test-ast",
+ "locked": false,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mtestAST\u001b[39m"
+ ]
+ },
+ "execution_count": 5,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def testAST: Expr = ConstDecl(\n",
+ " \"x\",\n",
+ " N(10),\n",
+ " ConstDecl(\n",
+ " \"f\",\n",
+ " Fun(\n",
+ " \"a\",\n",
+ " Binary(Plus, Var(\"x\"), Var(\"a\"))\n",
+ " ),\n",
+ " ConstDecl(\n",
+ " \"g\",\n",
+ " Fun(\n",
+ " \"b\",\n",
+ " ConstDecl(\n",
+ " \"x\",\n",
+ " N(20),\n",
+ " Call(Var(\"f\"), Var(\"b\"))\n",
+ " )\n",
+ " ),\n",
+ " Call(Var(\"g\"), N(-1))\n",
+ " )\n",
+ " )\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bb6ee383-7fee-40a8-9a5a-aec145db17c5",
+ "metadata": {},
+ "source": [
+ "#### Notes\n",
+ "\n",
+ "- Recall the difference between concrete and abstract syntax. Your AST\n",
+ " here will use the abstract syntax and be of type `Expr` defined\n",
+ " above. Therefore, the AST nodes you write in can only be\n",
+ " constructors of `Expr`. For example, the $\\mathbf{return}$ keyword\n",
+ " is in the concrete syntax but not a constructor of `Expr`.\n",
+ "\n",
+ "### Variables, Numbers, and Booleans\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 6 (10 points)**</span> Implement\n",
+ "`eval` the evaluation judgment form $E \\vdash e \\Downarrow v$ for all\n",
+ "rules except $\\TirName{EvalCall}$ shown in\n",
+ "<a href=\"#fig-javascripty-numbers-booleans-variables-limited\"\n",
+ "class=\"quarto-xref\">Figure 1</a>. It should be noted that this\n",
+ "implementation should very similar to your implementation of `eval` in\n",
+ "previous lab."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "9210e992",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "0a25c2c80a37b211092a4a7e7596a6c0",
+ "grade": false,
+ "grade_id": "eval-numbers-booleans-variables-answer",
+ "locked": false,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36meval\u001b[39m"
+ ]
+ },
+ "execution_count": 6,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def eval(env: Env, e: Expr): Expr = e match {\n",
+ " // EvalVar\n",
+ " case Var(x) => env(x)\n",
+ "\n",
+ " // EvalConstDecl\n",
+ " case ConstDecl(x, e1, e2) =>\n",
+ " val v1 = eval(env, e1)\n",
+ " val env2 = extend(env, x, v1)\n",
+ " eval(env2, e2)\n",
+ "\n",
+ " // EvalVal\n",
+ " case v if isValue(v) => v\n",
+ "\n",
+ " // EvalPlusNumber\n",
+ " case Binary(Plus, e1, e2) =>\n",
+ " val N(n1) = eval(env, e1)\n",
+ " val N(n2) = eval(env, e2)\n",
+ " N(n1 + n2)\n",
+ "\n",
+ " // EvalEquality\n",
+ " case Binary(bop, e1, e2) =>\n",
+ " val v1 = eval(env, e1)\n",
+ " val v2 = eval(env, e2)\n",
+ " bop match {\n",
+ " case Eq => B(v1 == v2)\n",
+ " case Ne => B(v1 != v2)\n",
+ " }\n",
+ "\n",
+ " // EvalIfTrue\n",
+ " // EvalIfFalse\n",
+ " case If(e1, e2, e3) => \n",
+ " eval(env, e1) match {\n",
+ " case B(true) => eval(env, e2)\n",
+ " case B(false) => eval(env, e3)\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "13e78183-7978-4bb2-8d93-a0ca52b443d3",
+ "metadata": {},
+ "source": [
+ "#### Notes\n",
+ "\n",
+ "- It is most beneficial to first implement `eval` from scratch by\n",
+ " referencing the rules shown in\n",
+ " <a href=\"#fig-javascripty-numbers-booleans-variables-limited\"\n",
+ " class=\"quarto-xref\">Figure 1</a>.\n",
+ "- After you implement `eval` here by following the rules, it may then\n",
+ " be informative to compare with your implementation from the previous\n",
+ " lab (that was for a larger language and with implicit type\n",
+ " coercions).\n",
+ "- You will have unmatched cases (i.e., there are no corresponding\n",
+ " rules), which you can leave unimplemented with `???` or the\n",
+ " potential for `MatchError`.\n",
+ "\n",
+ "#### Tests"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "fb80bad7",
+ "metadata": {
+ "deletable": false,
+ "editable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "3695ddb83d88e8f4d480e36ccd180022",
+ "grade": true,
+ "grade_id": "eval-numbers-booleans-tester",
+ "locked": true,
+ "points": 10,
+ "schema_version": 3,
+ "solution": false,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\u001b[36mRun starting. Expected test count is: 8\u001b[0m\n",
+ "\u001b[32mcmd7$Helper$$anon$1:\u001b[0m\n",
+ "\u001b[32m- should Test Case 1\u001b[0m\n",
+ "\u001b[32m- should Test Case 2\u001b[0m\n",
+ "\u001b[32m- should Test Case 3\u001b[0m\n",
+ "\u001b[32m- should Test Case 4\u001b[0m\n",
+ "\u001b[32m- should Test Case 5\u001b[0m\n",
+ "\u001b[32m- should Test Case 6\u001b[0m\n",
+ "\u001b[32m- should Test Case 7\u001b[0m\n",
+ "\u001b[32m- should Test Case 8\u001b[0m\n",
+ "\u001b[36mRun completed in 65 milliseconds.\u001b[0m\n",
+ "\u001b[36mTotal number of tests run: 8\u001b[0m\n",
+ "\u001b[36mSuites: completed 1, aborted 0\u001b[0m\n",
+ "\u001b[36mTests: succeeded 8, failed 0, canceled 0, ignored 0, pending 0\u001b[0m\n",
+ "\u001b[32mAll tests passed.\u001b[0m\n",
+ "*** 🎉 Tests Passed (10 points) ***\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mevalTest\u001b[39m"
+ ]
+ },
+ "execution_count": 7,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def evalTest(e: Expr): Expr = {\n",
+ " eval(empty, e)\n",
+ "}\n",
+ "val testExercise3 = test(new AnyFlatSpec {\n",
+ " val testcase1 = {\n",
+ " val e1 = N(1)\n",
+ " val e2 = N(2)\n",
+ " evalTest(Binary(Plus, e1, e2))\n",
+ " }\n",
+ " val testcase2 = {\n",
+ " val e1 = N(5)\n",
+ " val e2 = N(5)\n",
+ " evalTest(Binary(Eq, e1, e2)) \n",
+ " }\n",
+ " val testcase3 = {\n",
+ " val e1 = N(5)\n",
+ " val e2 = N(7)\n",
+ " evalTest(Binary(Eq, e1, e2))\n",
+ " }\n",
+ " val testcase4 = {\n",
+ " val e1 = N(5)\n",
+ " val e2 = N(7)\n",
+ " evalTest(Binary(Ne, e1, e2))\n",
+ " }\n",
+ " val testcase5 = {\n",
+ " val e1 = N(5)\n",
+ " val e2 = N(5)\n",
+ " evalTest(Binary(Ne, e1, e2))\n",
+ " }\n",
+ " val testcase6 = {\n",
+ " val e1 = N(3)\n",
+ " val e2 = Binary(Plus, Var(\"x\"), N(1))\n",
+ " evalTest(ConstDecl(\"x\", e1, e2)) \n",
+ " }\n",
+ " val testcase7 = {\n",
+ " val e1 = Binary(Plus, N(3), N(2))\n",
+ " val e2 = Binary(Plus, N(1), N(1))\n",
+ " evalTest(If(B(true), e1, e2)) \n",
+ " }\n",
+ " val testcase8 = {\n",
+ " val e1 = Binary(Plus, N(3), N(2))\n",
+ " val e2 = Binary(Plus, N(1), N(1))\n",
+ " evalTest(If(B(false), e1, e2)) \n",
+ "\n",
+ " }\n",
+ " \n",
+ " it should \"Test Case 1\" in { assertResult( N(3) ) { testcase1 } }\n",
+ " it should \"Test Case 2\" in { assertResult( B(true) ) { testcase2 } }\n",
+ " it should \"Test Case 3\" in { assertResult( B(false) ) { testcase3 } }\n",
+ " it should \"Test Case 4\" in { assertResult( B(true) ) { testcase4 } }\n",
+ " it should \"Test Case 5\" in { assertResult( B(false) ) { testcase5 } }\n",
+ " it should \"Test Case 6\" in { assertResult( N(4) ) { testcase6 } }\n",
+ " it should \"Test Case 7\" in { assertResult( N(5) ) { testcase7 } }\n",
+ " it should \"Test Case 8\" in { assertResult( N(2) ) { testcase8 } }\n",
+ "}, 10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0c8c160e-7429-46c4-91b0-be6c868100e4",
+ "metadata": {},
+ "source": [
+ "### Functions\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 7 (10 points)**</span> Extend\n",
+ "your implementation with functions. On function calls, you need to\n",
+ "extend the environment for the formal parameter. Begin with what you\n",
+ "have from <a href=\"#exr-eval-numbers-booleans-variables\"\n",
+ "class=\"quarto-xref\">Exercise 6</a>."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "088b0ba2",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "98be44a3e7d41eca8854a8bccd400d3a",
+ "grade": false,
+ "grade_id": "eval-functions-answer",
+ "locked": false,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36meval\u001b[39m"
+ ]
+ },
+ "execution_count": 8,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def eval(env: Env, e: Expr): Expr = e match {\n",
+ " // EvalVar\n",
+ " case Var(x) => env(x)\n",
+ "\n",
+ " // EvalConstDecl\n",
+ " case ConstDecl(x, e1, e2) =>\n",
+ " val v1 = eval(env, e1)\n",
+ " val env2 = extend(env, x, v1)\n",
+ " eval(env2, e2)\n",
+ "\n",
+ " // EvalVal\n",
+ " case v if isValue(v) => v\n",
+ "\n",
+ " // EvalPlusNumber\n",
+ " case Binary(Plus, e1, e2) =>\n",
+ " val N(n1) = eval(env, e1)\n",
+ " val N(n2) = eval(env, e2)\n",
+ " N(n1 + n2)\n",
+ "\n",
+ " // EvalEquality\n",
+ " case Binary(bop, e1, e2) =>\n",
+ " val v1 = eval(env, e1)\n",
+ " val v2 = eval(env, e2)\n",
+ " bop match {\n",
+ " case Eq => B(v1 == v2)\n",
+ " case Ne => B(v1 != v2)\n",
+ " }\n",
+ "\n",
+ " // EvalIfTrue\n",
+ " // EvalIfFalse\n",
+ " case If(e1, e2, e3) => \n",
+ " eval(env, e1) match {\n",
+ " case B(true) => eval(env, e2)\n",
+ " case B(false) => eval(env, e3)\n",
+ " }\n",
+ "\n",
+ " case Fun(x, e1) => Fun(x, e1)\n",
+ "\n",
+ " // EvalCall\n",
+ " case Call(e1, e2) => \n",
+ " eval(env, e1) match {\n",
+ " case Fun(x, e_) => \n",
+ " val v2 = eval(env, e2)\n",
+ " eval(extend(env, x, v2),e_)\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4f776771-a0e1-4391-a514-ff044aadc445",
+ "metadata": {},
+ "source": [
+ "#### Notes\n",
+ "\n",
+ "- This question is asking you to implement $\\TirName{EvalCall}$.\n",
+ "- Do not worry yet about dynamic type errors, so this will still have\n",
+ " some `???`s or have the possibility of `MatchError`s.\n",
+ "\n",
+ "#### Tests"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "1d49dca6",
+ "metadata": {
+ "deletable": false,
+ "editable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "ebf02bf658d2c9645b2d1df72d32d49a",
+ "grade": true,
+ "grade_id": "eval-functions-tester",
+ "locked": true,
+ "points": 10,
+ "schema_version": 3,
+ "solution": false,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\u001b[36mRun starting. Expected test count is: 9\u001b[0m\n",
+ "\u001b[32mcmd9$Helper$$anon$1:\u001b[0m\n",
+ "\u001b[32m- should Test Case 1\u001b[0m\n",
+ "\u001b[32m- should Test Case 2\u001b[0m\n",
+ "\u001b[32m- should Test Case 3\u001b[0m\n",
+ "\u001b[32m- should Test Case 4\u001b[0m\n",
+ "\u001b[32m- should Test Case 5\u001b[0m\n",
+ "\u001b[32m- should Test Case 6\u001b[0m\n",
+ "\u001b[32m- should Test Case 7\u001b[0m\n",
+ "\u001b[32m- should Test Case 8\u001b[0m\n",
+ "\u001b[32m- should Test Case 9\u001b[0m\n",
+ "\u001b[36mRun completed in 4 milliseconds.\u001b[0m\n",
+ "\u001b[36mTotal number of tests run: 9\u001b[0m\n",
+ "\u001b[36mSuites: completed 1, aborted 0\u001b[0m\n",
+ "\u001b[36mTests: succeeded 9, failed 0, canceled 0, ignored 0, pending 0\u001b[0m\n",
+ "\u001b[32mAll tests passed.\u001b[0m\n",
+ "*** 🎉 Tests Passed (10 points) ***\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mevalTest\u001b[39m"
+ ]
+ },
+ "execution_count": 9,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def evalTest(e: Expr): Expr = {\n",
+ " eval(empty, e)\n",
+ "}\n",
+ "test(new AnyFlatSpec {\n",
+ " val testcase1 = {\n",
+ " val x = \"x\"\n",
+ " val e1 = Fun(x, Binary(Plus, Var(x), N(1)))\n",
+ " val e2 = N(2)\n",
+ " evalTest(Call(e1, e2))\n",
+ " }\n",
+ " val testcase2 = {\n",
+ " val x = \"x\"\n",
+ " val e1 = Fun(x, If(Binary(Eq, Var(x), N(0)), N(1), N(0)))\n",
+ " val e2 = N(0)\n",
+ " evalTest(Call(e1, e2))\n",
+ " }\n",
+ " val testcase3 = {\n",
+ " val x = \"x\"\n",
+ " val e1 = Fun(x, If(Binary(Eq, Var(x), N(0)), N(1), N(0)))\n",
+ " val e2 = N(1)\n",
+ " evalTest(Call(e1, e2))\n",
+ " }\n",
+ " val testcase4 = {\n",
+ " val x = \"x\"\n",
+ " val e1 = Fun(x, If(Binary(Ne, Var(x), N(0)), N(1), N(0)))\n",
+ " val e2 = N(1)\n",
+ " evalTest(Call(e1, e2))\n",
+ " }\n",
+ " val testcase5 = {\n",
+ " val a = \"a\"\n",
+ " val e1 = N(3)\n",
+ " val e2 = Binary(Plus, Var(\"x\"), N(1))\n",
+ " val e3 = ConstDecl(\"x\", e1, e2)\n",
+ " val e4 = Fun(a, e3)\n",
+ " evalTest(Call(e4, N(1)))\n",
+ " }\n",
+ " val testcase6 = {\n",
+ " val a = \"a\"\n",
+ " val e1 = Binary(Plus, N(3), Var(a))\n",
+ " val e2 = Binary(Plus, Var(\"x\"), N(1))\n",
+ " val e3 = ConstDecl(\"x\", e1, e2)\n",
+ " val e4 = Fun(a, e3)\n",
+ " evalTest(Call(e4, N(2)))\n",
+ " }\n",
+ " val testcase7 = {\n",
+ " val a = \"a\"\n",
+ " val f = \"f\"\n",
+ " val e1 = Binary(Plus, N(3), Var(a))\n",
+ " val e2 = Binary(Plus, Var(\"x\"), N(1))\n",
+ " val e3 = ConstDecl(\"x\", e1, e2)\n",
+ " val e4 = Fun(a, e3)\n",
+ " val e5 = ConstDecl(f, e4, Call(Var(f), N(2)))\n",
+ " evalTest(e5)\n",
+ " }\n",
+ " val testcase8 = {\n",
+ " val f = \"f\"\n",
+ " val a = \"a\"\n",
+ " val e1 = Binary(Plus, Var(a), Var(a))\n",
+ " val e2 = Fun(a, e1)\n",
+ " val e3 = Fun(f, Call(Var(f), N(5)))\n",
+ " evalTest(Call(e3, e2))\n",
+ " }\n",
+ " val testcase9 = {\n",
+ " val a1 = \"a1\"\n",
+ " val a2 = \"a2\"\n",
+ " val x = \"x\"\n",
+ " val e1 = Fun(a1, Var(x))\n",
+ " val e2 = ConstDecl(x, N(20), Call(e1, N(-1))) \n",
+ " val e3 = Fun(a2, e2)\n",
+ " val e4 = ConstDecl(x, N(10), Call(e3, N(-1)))\n",
+ " evalTest(e4)\n",
+ " }\n",
+ "\n",
+ " it should \"Test Case 1\" in { assertResult( N(3) ) { testcase1 } }\n",
+ " it should \"Test Case 2\" in { assertResult( N(1) ) { testcase2 } }\n",
+ " it should \"Test Case 3\" in { assertResult( N(0) ) { testcase3 } }\n",
+ " it should \"Test Case 4\" in { assertResult( N(1) ) { testcase4 } }\n",
+ " it should \"Test Case 5\" in { assertResult( N(4) ) { testcase5 } }\n",
+ " it should \"Test Case 6\" in { assertResult( N(6) ) { testcase6 } }\n",
+ " it should \"Test Case 7\" in { assertResult( N(6) ) { testcase7 } }\n",
+ " it should \"Test Case 8\" in { assertResult( N(10) ) { testcase8 } }\n",
+ " it should \"Test Case 9\" in { assertResult( N(20) ) { testcase9 } }\n",
+ " \n",
+ "}, 10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ecbbe931-a561-4031-a057-ab565507f11b",
+ "metadata": {},
+ "source": [
+ "### Dynamic Typing\n",
+ "\n",
+ "In the previous lab, all expressions could be evaluated to something\n",
+ "(because of conversions). With functions, we encounter one of the very\n",
+ "few run-time errors in JavaScript: trying to call something that is not\n",
+ "a function. In JavaScript and in JavaScripty, calling a non-function\n",
+ "raises a run-time error. Such a run-time error is known as a dynamic\n",
+ "type error. Languages are called *dynamically typed* when they allow all\n",
+ "syntactically valid programs to run and check for type errors during\n",
+ "execution.\n",
+ "\n",
+ "We define a Scala exception"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "7150ac39",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mclass\u001b[39m \u001b[36mDynamicTypeError\u001b[39m"
+ ]
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "case class DynamicTypeError(e: Expr) extends Exception {\n",
+ " override def toString = s\"TypeError: in expression $e\"\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1a1008b1-c83d-4604-a59d-e1c0f53c816c",
+ "metadata": {},
+ "source": [
+ "to signal this case. In other words, when your interpreter discovers a\n",
+ "dynamic type error, it should throw this exception using the following\n",
+ "Scala code:\n",
+ "\n",
+ "``` scala\n",
+ "throw DynamicTypeError(e)\n",
+ "```\n",
+ "\n",
+ "The argument should be the input expression `e` to `eval` where the type\n",
+ "error was detected. That is, the expression where there is no possible\n",
+ "rule to continue. For example, in the case of calling a non-function,\n",
+ "the type error should be reported on the `Call` node and not any\n",
+ "sub-expression.\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 8 (10 points)**</span> Add\n",
+ "support for checking for all dynamic type errors. You should have no\n",
+ "possibility for a `MatchError` or a `NotImplementedError`. Start with\n",
+ "what you have from\n",
+ "<a href=\"#exr-eval-functions\" class=\"quarto-xref\">Exercise 7</a>."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "8c104880",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "f4cd95ddb270a0c91ec29e3253cfcc38",
+ "grade": false,
+ "grade_id": "eval-dynamic-typing-answer",
+ "locked": false,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36meval\u001b[39m"
+ ]
+ },
+ "execution_count": 11,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def eval(env: Env, e: Expr): Expr = e match {\n",
+ " // EvalVar\n",
+ " case Var(x) => env(x)\n",
+ "\n",
+ " // EvalConstDecl\n",
+ " case ConstDecl(x, e1, e2) =>\n",
+ " val v1 = eval(env, e1)\n",
+ " val env2 = extend(env, x, v1)\n",
+ " eval(env2, e2)\n",
+ "\n",
+ " // EvalVal\n",
+ " case v if isValue(v) => v\n",
+ "\n",
+ " // EvalPlusNumber\n",
+ " case Binary(Plus, e1, e2) =>\n",
+ " (eval(env, e1), eval(env, e2)) match {\n",
+ " case (N(n1), N(n2)) => N(n1+ n2)\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ "\n",
+ " // EvalEquality\n",
+ " case Binary(bop, e1, e2) =>\n",
+ " val v1 = eval(env, e1)\n",
+ " val v2 = eval(env, e2)\n",
+ " bop match {\n",
+ " case Eq => B(v1 == v2)\n",
+ " case Ne => B(v1 != v2)\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ "\n",
+ " // EvalIfTrue\n",
+ " // EvalIfFalse\n",
+ " case If(e1, e2, e3) => \n",
+ " eval(env, e1) match {\n",
+ " case B(true) => eval(env, e2)\n",
+ " case B(false) => eval(env, e3)\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ "\n",
+ " case Fun(x, e1) => Fun(x, e1)\n",
+ "\n",
+ " // EvalCall\n",
+ " case Call(e1, e2) => \n",
+ " eval(env, e1) match {\n",
+ " case Fun(x, e_) => \n",
+ " val v2 = eval(env, e2)\n",
+ " eval(extend(env, x, v2),e_)\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "01f6fc90-6a86-44b5-8003-60c3e881f115",
+ "metadata": {},
+ "source": [
+ "#### Tests"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "672aa11d",
+ "metadata": {
+ "deletable": false,
+ "editable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "2828b9ff89630db9b066df03618b875d",
+ "grade": true,
+ "grade_id": "eval-dynamic-typing-tester",
+ "locked": true,
+ "points": 10,
+ "schema_version": 3,
+ "solution": false,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\u001b[36mRun starting. Expected test count is: 8\u001b[0m\n",
+ "\u001b[32mcmd12$Helper$$anon$1:\u001b[0m\n",
+ "\u001b[32m- should Test Case 1\u001b[0m\n",
+ "\u001b[32m- should Test Case 2\u001b[0m\n",
+ "\u001b[32m- should Test Case 3\u001b[0m\n",
+ "\u001b[32m- should Test Case 4\u001b[0m\n",
+ "\u001b[32m- should Test Case 5\u001b[0m\n",
+ "\u001b[32m- should Test Case 6\u001b[0m\n",
+ "\u001b[32m- should Test Case 7\u001b[0m\n",
+ "\u001b[32m- should Test Case 8\u001b[0m\n",
+ "\u001b[36mRun completed in 5 milliseconds.\u001b[0m\n",
+ "\u001b[36mTotal number of tests run: 8\u001b[0m\n",
+ "\u001b[36mSuites: completed 1, aborted 0\u001b[0m\n",
+ "\u001b[36mTests: succeeded 8, failed 0, canceled 0, ignored 0, pending 0\u001b[0m\n",
+ "\u001b[32mAll tests passed.\u001b[0m\n",
+ "*** 🎉 Tests Passed (10 points) ***\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mevalTest\u001b[39m"
+ ]
+ },
+ "execution_count": 12,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def evalTest(e: Expr): Expr = {\n",
+ " eval(empty, e)\n",
+ "}\n",
+ "test(new AnyFlatSpec {\n",
+ " val testcase1 = {\n",
+ " val e1 = Binary(Plus, N(3), N(2))\n",
+ " val e2 = Binary(Plus, N(1), N(1))\n",
+ " evalTest(If(B(true), e1, e2)) \n",
+ " }\n",
+ " val testcase2 = {\n",
+ " val e1 = Binary(Plus, N(3), N(2))\n",
+ " val e2 = Binary(Plus, N(1), N(1))\n",
+ " evalTest(If(B(false), e1, e2)) \n",
+ "\n",
+ " }\n",
+ " val testcase3 = {\n",
+ " val a = \"a\"\n",
+ " val f = \"f\"\n",
+ " val e1 = Binary(Plus, N(3), Var(a))\n",
+ " val e2 = Binary(Plus, Var(\"x\"), N(1))\n",
+ " val e3 = ConstDecl(\"x\", e1, e2)\n",
+ " val e4 = Fun(a, e3)\n",
+ " val e5 = ConstDecl(f, e4, Call(Var(f), N(2)))\n",
+ " evalTest(e5)\n",
+ " }\n",
+ " val testcase4 = {\n",
+ " val f = \"f\"\n",
+ " val a = \"a\"\n",
+ " val e1 = Binary(Plus, Var(a), Var(a))\n",
+ " val e2 = Fun(a, e1)\n",
+ " val e3 = Fun(f, Call(Var(f), N(5)))\n",
+ " evalTest(Call(e3, e2))\n",
+ " }\n",
+ " val testcase5 = {\n",
+ " val a1 = \"a1\"\n",
+ " val a2 = \"a2\"\n",
+ " val x = \"x\"\n",
+ " val e1 = Fun(a1, Var(x))\n",
+ " val e2 = ConstDecl(x, N(20), Call(e1, N(-1))) \n",
+ " val e3 = Fun(a2, e2)\n",
+ " val e4 = ConstDecl(x, N(10), Call(e3, N(-1)))\n",
+ " evalTest(e4)\n",
+ " }\n",
+ " val testcase6 = {\n",
+ " Binary(Plus, N(4), B(true))\n",
+ " }\n",
+ " val testcase7 = {\n",
+ " Call(N(4), N(2))\n",
+ " }\n",
+ " val testcase8 = {\n",
+ " Call(N(4), B(true))\n",
+ " }\n",
+ " \n",
+ " it should \"Test Case 1\" in { assertResult( N(5) ) { testcase1 } }\n",
+ " it should \"Test Case 2\" in { assertResult( N(2) ) { testcase2 } }\n",
+ " it should \"Test Case 3\" in { assertResult( N(6) ) { testcase3 } }\n",
+ " it should \"Test Case 4\" in { assertResult( N(10) ) { testcase4 } }\n",
+ " it should \"Test Case 5\" in { assertResult( N(20) ) { testcase5 } }\n",
+ " it should \"Test Case 6\" in { intercept[DynamicTypeError] {evalTest(testcase6)} }\n",
+ " it should \"Test Case 7\" in { intercept[DynamicTypeError] {evalTest(testcase7)} }\n",
+ " it should \"Test Case 8\" in { intercept[DynamicTypeError] {evalTest(testcase8)} }\n",
+ "}, 10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8190241b-e967-44a0-a78a-37f727b85dee",
+ "metadata": {},
+ "source": [
+ "### Dynamic Scoping\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 9 (5 points)**</span> Below is a\n",
+ "cell that runs the AST from the test case you wrote in\n",
+ "<a href=\"#exr-dynamic-scoping-test-ast\"\n",
+ "class=\"quarto-xref\">Exercise 5</a> with your interpreter implementation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "cdbae7a0",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mevalTest\u001b[39m\n",
+ "\u001b[36mres13_1\u001b[39m: \u001b[32mExpr\u001b[39m = \u001b[33mN\u001b[39m(n = \u001b[32m19.0\u001b[39m)"
+ ]
+ },
+ "execution_count": 13,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def evalTest(e: Expr): Expr = {\n",
+ " eval(empty, e)\n",
+ "}\n",
+ "evalTest(testAST)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6845ef4c-e83b-4097-94a5-036e0bf8ae6f",
+ "metadata": {},
+ "source": [
+ "Does it evaluate to what your excepted? The evaluation output above\n",
+ "should be different from what it would evaluate to with a JavaScript\n",
+ "interpreter, such as Deno. Ensure that the results are different and\n",
+ "write below what each interpreter evaluates to.\n",
+ "\n",
+ "***Edit this cell:***"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "15aabbfc-2b58-4493-bf6a-45427c5f558a",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "markdown",
+ "checksum": "385ebf880935ca1bad2f2af35b2a3c98",
+ "grade": true,
+ "grade_id": "3_8_1",
+ "locked": false,
+ "points": 1,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "source": [
+ "Yes, the value calcuated by the Scala AST implemented in exercise 5 differs from the JavaScript function implemented in exercise 1."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "18749b90-2bc0-401e-b6d0-c223e32ab4f6",
+ "metadata": {},
+ "source": [
+ "It seems like we implemented dynamic scoping instead of static scoping.\n",
+ "Explain the failed test case and how your interpreter behaves\n",
+ "differently compared to a JavaScript interpreter. Furthermore, think\n",
+ "about why this is the case and explain in 1-2 sentences *why* your\n",
+ "interpreter behaves differently.\n",
+ "\n",
+ "***Edit this cell:***"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0d4befe8-ffbf-4eda-be18-5a634ef592fb",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "markdown",
+ "checksum": "691ba88661c76c567302965e44648ff8",
+ "grade": true,
+ "grade_id": "3_8_2",
+ "locked": false,
+ "points": 4,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "source": [
+ "YOUR ANSWER HERE"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0f7201df-c431-4b7f-acad-998e2da4d029",
+ "metadata": {},
+ "source": [
+ "### Closures\n",
+ "\n",
+ "In order to fix our dynamic scoping issue, we will implement explicit\n",
+ "closures. That is, when functions are evaluated, they will use the value\n",
+ "environment in which they were defined.\n",
+ "\n",
+ "Here are the updates to our abstract syntax.\n",
+ "\n",
+ "$$\n",
+ "\\begin{array}{rrrll}\n",
+ "\\text{expressions} & e& \\mathrel{::=}&\n",
+ "\\texttt{(}x\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e_1\n",
+ "\\mid e_1\\texttt{(}e_2\\texttt{)}\n",
+ "\\\\\n",
+ "\\text{values} & v& \\mathrel{::=}& \\texttt{(}x\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e_1[E]\n",
+ "\\\\\n",
+ "\\text{variables} & x\n",
+ "\\end{array}\n",
+ "$$\n",
+ "\n",
+ "Notice that now, closures are values (and functions are not), while\n",
+ "functions are still expressions.\n",
+ "\n",
+ "We also add the $\\TirName{EvalFun}$ rule to our operational semantics,\n",
+ "and edit the $\\TirName{EvalCall}$ rule, seen below.\n",
+ "\n",
+ "$\\fbox{$E \\vdash e \\Downarrow v$}$\n",
+ "\n",
+ "$\\inferrule[EvalFun]{\n",
+ "}{\n",
+ " E \\vdash \n",
+ " \\texttt{(}x\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e\n",
+ " \\Downarrow\n",
+ " \\texttt{(}x\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e [ E]\n",
+ " {\n",
+ " }\n",
+ "}$\n",
+ "\n",
+ "$\\inferrule[EvalCall]{\n",
+ " E \\vdash e_1 \\Downarrow \\texttt{(}x\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e'[E'] \n",
+ " \\and\n",
+ " E \\vdash e_2 \\Downarrow v_2 \n",
+ " \\and\n",
+ " E'[\\mathop{}x \\mapsto v_2] \\vdash e' \\Downarrow v' \n",
+ "}{\n",
+ " E \\vdash \n",
+ " e_1\\texttt{(}e_2\\texttt{)}\n",
+ " \\Downarrow v' \n",
+ "}$\n",
+ "\n",
+ "In order to implement this, we will add `Closure` to our `Expr` type,\n",
+ "and edit other helper functions as needed."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "896b02a7",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mclass\u001b[39m \u001b[36mClosure\u001b[39m\n",
+ "defined \u001b[32mfunction\u001b[39m \u001b[36misValue\u001b[39m\n",
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mextend\u001b[39m"
+ ]
+ },
+ "execution_count": 14,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "case class Closure(fun: Fun, env: Env) extends Expr \n",
+ "def isValue(e: Expr): Boolean = e match {\n",
+ " case N(_) | B(_) | Closure(_, _) => true\n",
+ " case _ => false\n",
+ "}\n",
+ "def extend(env: Env, x: String, v: Expr): Env = {\n",
+ " require(isValue(v))\n",
+ " env + (x -> v)\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "740f737e-0e78-483e-88d5-a867e41f8251",
+ "metadata": {},
+ "source": [
+ "<span class=\"theorem-title\">**Exercise 10 (10 points)**</span> With the\n",
+ "above, implement a new version of `eval` that uses closures to enforce\n",
+ "static scoping. Begin with what you have from\n",
+ "<a href=\"#exr-eval-dynamic-typing\" class=\"quarto-xref\">Exercise 8</a>."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "id": "ab078f65",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "323612cac66795535f6a9e1e66b84bd8",
+ "grade": false,
+ "grade_id": "eval-closures-answer",
+ "locked": false,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36meval\u001b[39m"
+ ]
+ },
+ "execution_count": 18,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def eval(env: Env, e: Expr): Expr = e match {\n",
+ " // EvalVar\n",
+ " case Var(x) => env(x)\n",
+ "\n",
+ " // EvalConstDecl\n",
+ " case ConstDecl(x, e1, e2) =>\n",
+ " val v1 = eval(env, e1)\n",
+ " val env2 = extend(env, x, v1)\n",
+ " eval(env2, e2)\n",
+ "\n",
+ " // EvalVal\n",
+ " case v if isValue(v) => v\n",
+ "\n",
+ " // EvalPlusNumber\n",
+ " case Binary(Plus, e1, e2) =>\n",
+ " (eval(env, e1), eval(env, e2)) match {\n",
+ " case (N(n1), N(n2)) => N(n1+ n2)\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ "\n",
+ " // EvalEquality\n",
+ " case Binary(bop, e1, e2) =>\n",
+ " val v1 = eval(env, e1)\n",
+ " val v2 = eval(env, e2)\n",
+ " (v1, v2) match {\n",
+ " case (B(b1), B(b2)) => bop match {\n",
+ " case Eq => B(b1 == b2)\n",
+ " case Ne => B(b1 != b2)\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ " case (N(n1), N(n2)) => bop match {\n",
+ " case Eq => B(n1 == n2)\n",
+ " case Ne => B(n1 != n2)\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ "\n",
+ " // EvalIfTrue\n",
+ " // EvalIfFalse\n",
+ " case If(e1, e2, e3) => \n",
+ " eval(env, e1) match {\n",
+ " case B(true) => eval(env, e2)\n",
+ " case B(false) => eval(env, e3)\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ "\n",
+ " case Fun(x, e1) => Closure(Fun(x, e1), env)\n",
+ " \n",
+ " // EvalCall\n",
+ " case Call(e1, e2) => \n",
+ " eval(env, e1) match {\n",
+ " case Closure(Fun(x, e_), closureEnv) => \n",
+ " val v2 = eval(env, e2)\n",
+ " eval(extend(closureEnv, x, v2), e_)\n",
+ " case _ => throw DynamicTypeError(e)\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "14520cd2-7937-447a-9b26-9fd0807bd8ef",
+ "metadata": {},
+ "source": [
+ "#### Tests"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "a57c1165",
+ "metadata": {
+ "deletable": false,
+ "editable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "6f434a07eee475c3bcb3ca595c71cbcf",
+ "grade": true,
+ "grade_id": "eval-closures-tester",
+ "locked": true,
+ "points": 10,
+ "schema_version": 3,
+ "solution": false,
+ "task": false
+ }
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\u001b[36mRun starting. Expected test count is: 6\u001b[0m\n",
+ "\u001b[32mcmd19$Helper$$anon$1:\u001b[0m\n",
+ "\u001b[32m- should Test Case 1\u001b[0m\n",
+ "\u001b[32m- should Test Case 2\u001b[0m\n",
+ "\u001b[32m- should Test Case 3\u001b[0m\n",
+ "\u001b[32m- should Test Case 4\u001b[0m\n",
+ "\u001b[32m- should Test Case 5\u001b[0m\n",
+ "\u001b[32m- should Test Case 6\u001b[0m\n",
+ "\u001b[36mRun completed in 5 milliseconds.\u001b[0m\n",
+ "\u001b[36mTotal number of tests run: 6\u001b[0m\n",
+ "\u001b[36mSuites: completed 1, aborted 0\u001b[0m\n",
+ "\u001b[36mTests: succeeded 6, failed 0, canceled 0, ignored 0, pending 0\u001b[0m\n",
+ "\u001b[32mAll tests passed.\u001b[0m\n",
+ "*** 🎉 Tests Passed (10 points) ***\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mevalTest\u001b[39m"
+ ]
+ },
+ "execution_count": 19,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def evalTest(e: Expr): Expr = {\n",
+ " eval(empty, e)\n",
+ "}\n",
+ "test(new AnyFlatSpec {\n",
+ " val testcase1 = {\n",
+ " val a = \"a\"\n",
+ " val f = \"f\"\n",
+ " val e1 = Binary(Plus, N(3), Var(a))\n",
+ " val e2 = Binary(Plus, Var(\"x\"), N(1))\n",
+ " val e3 = ConstDecl(\"x\", e1, e2)\n",
+ " val e4 = Fun(a, e3)\n",
+ " val e5 = ConstDecl(f, e4, Call(Var(f), N(2)))\n",
+ " evalTest(e5)\n",
+ " }\n",
+ " val testcase2 = {\n",
+ " val a1 = \"a1\"\n",
+ " val a2 = \"a2\"\n",
+ " val x = \"x\"\n",
+ " val e1 = Fun(a1, Var(x))\n",
+ " val e2 = ConstDecl(x, N(40), Call(e1, N(-1))) \n",
+ " val e3 = Fun(a2, e2)\n",
+ " val e4 = ConstDecl(x, N(10), Call(e3, N(-1)))\n",
+ " evalTest(e4)\n",
+ " }\n",
+ " val testcase3 = {\n",
+ " val f = \"f\"\n",
+ " val a = \"a\"\n",
+ " val e1 = Binary(Plus, Var(a), Var(a))\n",
+ " val e2 = Fun(a, e1)\n",
+ " val e3 = Fun(f, Call(e2, N(5)))\n",
+ " evalTest(Call(e3, e2))\n",
+ " }\n",
+ " val testcase4 = {\n",
+ " Call(N(4), B(true))\n",
+ " }\n",
+ " val testcase5 = {\n",
+ " Binary(Ne, B(false), Fun(\"a\", N(2)))\n",
+ " }\n",
+ " val testcase6 = {\n",
+ " evalTest(ConstDecl( \"x\", N(30), ConstDecl(\"f\", Fun(\"a\", Var(\"x\")), ConstDecl(\"g\", Fun( \"b\", ConstDecl(\"x\", N(20), Call(Var(\"f\"), N(-1)))),Call(Var(\"g\"), N(-1))))))\n",
+ " }\n",
+ " it should \"Test Case 1\" in { assertResult( N(6) ) { testcase1 } }\n",
+ " it should \"Test Case 2\" in { assertResult( N(40) ) { testcase2 } }\n",
+ " it should \"Test Case 3\" in { assertResult( N(10) ) { testcase3 } }\n",
+ " it should \"Test Case 4\" in { intercept[DynamicTypeError] {evalTest(testcase4)} }\n",
+ " it should \"Test Case 5\" in { intercept[DynamicTypeError] {evalTest(testcase5)} }\n",
+ " it should \"Test Case 6\" in { assertResult( N(30) ) { testcase6 } }\n",
+ "}, 10)"
+ ]
+ },
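+ {
+ "cell_type": "markdown",
+ "id": "scope-check-sketch",
+ "metadata": {},
+ "source": [
+ "In addition to the graded tests above, here is a small ungraded sanity\n",
+ "check (a sketch that assumes the `Expr` constructors and the `eval` you\n",
+ "just wrote). Under static scoping, `f` captures the environment where\n",
+ "`x` is `1`, so the whole expression should evaluate to `N(1)`; a\n",
+ "dynamically scoped interpreter would instead produce `N(2)`.\n",
+ "\n",
+ "```scala\n",
+ "// Hypothetical check (not part of the grader): f should see the x it closed over.\n",
+ "val scopeCheck: Expr =\n",
+ "  ConstDecl(\"x\", N(1),\n",
+ "    ConstDecl(\"f\", Fun(\"y\", Binary(Plus, Var(\"x\"), Var(\"y\"))),\n",
+ "      ConstDecl(\"g\", Fun(\"x\", Call(Var(\"f\"), N(0))),\n",
+ "        Call(Var(\"g\"), N(2)))))\n",
+ "\n",
+ "// eval(empty, scopeCheck) should give N(1) with closures; dynamic scoping would give N(2).\n",
+ "```"
+ ]
+ },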
+ {
+ "cell_type": "markdown",
+ "id": "e3845b97-e1d3-4000-863e-b8a89c9ee8f0",
+ "metadata": {},
+ "source": [
+ "This code tests your new implementation against the dynamic scoping test\n",
+ "case you wrote:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "id": "91685c58",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "defined \u001b[32mfunction\u001b[39m \u001b[36mevalTest\u001b[39m\n",
+ "\u001b[36mres17_1\u001b[39m: \u001b[32mExpr\u001b[39m = \u001b[33mN\u001b[39m(n = \u001b[32m9.0\u001b[39m)"
+ ]
+ },
+ "execution_count": 17,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "def evalTest(e: Expr): Expr = {\n",
+ " eval(empty, e)\n",
+ "}\n",
+ "evalTest(testAST)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "475e61c7-1151-4e8b-8657-70427db65b45",
+ "metadata": {},
+ "source": [
+ "If you’re implementation is correct, it should evaluate to what a\n",
+ "JavaScript interpreter evaluates it to.\n",
+ "\n",
+ "## Implementing Recursive Functions (Accelerated)\n",
+ "\n",
+ "The remaining exercises are for those who want to go deeper and take an\n",
+ "“accelerated” version of this course.\n",
+ "\n",
+ "We begin by extending our abstract syntax to allow for recursive\n",
+ "functions. To call a function within itself, we permit functions to have\n",
+ "a variable identifier to refer to itself. If the identifier is present,\n",
+ "then it can be used for recursion.\n",
+ "\n",
+ "$$\\begin{array}{rlrll}\n",
+ "\\text{expressions} & e& \\mathrel{::=}&\n",
+ "x^{?}\\texttt{(}y\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e_1\n",
+ "\\mid e_1\\texttt{(}e_2\\texttt{)}\n",
+ "\\\\\n",
+ "\\text{optional variables} & x^{?}& \\mathrel{::=}& x\\mid\\varepsilon\n",
+ "\\\\\n",
+ "\\text{variables} & x\n",
+ "\\end{array}$$\n",
+ "\n",
+ "### Defining Inference Rules\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 11 (10 points)**</span> To allow\n",
+ "for recursion, at a function call, we must bind the function identifier\n",
+ "to the function value when evaluating the function body. Give a\n",
+ "inference rule called $\\TirName{EvalCallRec}$ that describes this\n",
+ "semantics.\n",
+ "\n",
+ "***Edit this cell:***"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ad4ad55f-d27a-4157-8f06-6aed2fd50e0a",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "markdown",
+ "checksum": "d306618c54a8f200a10728d1ea2b2305",
+ "grade": true,
+ "grade_id": "3_10",
+ "locked": false,
+ "points": 5,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "source": [
+ "YOUR ANSWER HERE"
+ ]
+ },
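+ {
+ "cell_type": "markdown",
+ "id": "evalcallrec-sketch",
+ "metadata": {},
+ "source": [
+ "For orientation only, one possible shape such a rule could take (a\n",
+ "sketch, not necessarily the intended answer) is to evaluate $e_1$ to a\n",
+ "closure and, when the optional identifier is present, bind it to that\n",
+ "closure itself before binding the parameter:\n",
+ "\n",
+ "$\\inferrule[EvalCallRec]{\n",
+ "  E \\vdash e_1 \\Downarrow v_1\n",
+ "  \\and\n",
+ "  v_1 = x\\texttt{(}y\\texttt{)} \\mathrel{\\texttt{=}\\!\\texttt{>}} e'[E']\n",
+ "  \\and\n",
+ "  E \\vdash e_2 \\Downarrow v_2\n",
+ "  \\and\n",
+ "  E'[\\mathop{}x \\mapsto v_1][\\mathop{}y \\mapsto v_2] \\vdash e' \\Downarrow v'\n",
+ "}{\n",
+ "  E \\vdash \n",
+ "  e_1\\texttt{(}e_2\\texttt{)}\n",
+ "  \\Downarrow v'\n",
+ "}$"
+ ]
+ },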
+ {
+ "cell_type": "markdown",
+ "id": "1a99119f-80e5-42eb-891d-f1ec6b1d4d12",
+ "metadata": {},
+ "source": [
+ "### Writing a Test Case\n",
+ "\n",
+ "<span class=\"theorem-title\">**Exercise 12 (1 point)**</span> Write a\n",
+ "function `sumOneToN` that computes the sum from `1` to `n` using the\n",
+ "fragment of JavaScripty in this assignment.\n",
+ "\n",
+ "***Edit this cell:***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "667f4cf9-6405-4a29-8cf5-ee5b3ce50a16",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "0a8e3b9d2ad3e3621c5e8ee66a8170c4",
+ "grade": true,
+ "grade_id": "3_11_1",
+ "locked": false,
+ "points": 1,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "outputs": [],
+ "source": [
+ "function sumOneToN(n) \n",
+ " ???"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c6fe0d18-fcb6-4eb6-af25-e8372058523c",
+ "metadata": {},
+ "source": [
+ "In order to allow for recursive, we must edit our `Fun` constructor to\n",
+ "accept an optional variable. (We must also re-run the other constructor\n",
+ "and helper functions that rely on `Fun`.)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5bc4fe97",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "case class Fun(xopt: Option[String], y: String, e1: Expr) extends Expr \n",
+ "case class Closure(fun: Fun, env: Env) extends Expr \n",
+ "def isValue(e: Expr): Boolean = e match {\n",
+ " case N(_) | B(_) | Closure(_, _) => true\n",
+ " case _ => false\n",
+ "}\n",
+ "def extend(env: Env, x: String, v: Expr): Env = {\n",
+ " require(isValue(v))\n",
+ " env + (x -> v)\n",
+ "}"
+ ]
+ },
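+ {
+ "cell_type": "markdown",
+ "id": "fun-option-sketch",
+ "metadata": {},
+ "source": [
+ "Before the exercises below, it may help to see the optional identifier\n",
+ "in use. The following is an ungraded sketch (it assumes the updated\n",
+ "`Fun` above and an `eval` extended as in Exercise 14): a function that\n",
+ "counts down to zero by calling itself through its own name.\n",
+ "\n",
+ "```scala\n",
+ "// Hypothetical example (not required by the grader): a self-referencing function.\n",
+ "val countDown: Expr =\n",
+ "  Fun(Some(\"f\"), \"n\",\n",
+ "    If(Binary(Eq, Var(\"n\"), N(0)),\n",
+ "      N(0),\n",
+ "      Call(Var(\"f\"), Binary(Plus, Var(\"n\"), N(-1)))))\n",
+ "\n",
+ "// Call(countDown, N(3)) should evaluate to N(0) once eval binds \"f\" to the closure.\n",
+ "```"
+ ]
+ },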
+ {
+ "cell_type": "markdown",
+ "id": "b361c93d-5c0b-4cea-8515-d798783521ea",
+ "metadata": {},
+ "source": [
+ "<span class=\"theorem-title\">**Exercise 13 (4 points)**</span> Now that\n",
+ "we have an abstract syntax tree node to write recursive functions,\n",
+ "create an `Expr` that is a recursive function which computes the sum\n",
+ "from `1` to a parameter `n`. This will be used in a test case for an\n",
+ "updated version of `eval`. Write out the AST that your `sumOneToN`\n",
+ "function will be parsed to.\n",
+ "\n",
+ "***Edit this cell:***"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "142ec74a",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "0c1c0102d524a6597926587b45dfbd78",
+ "grade": true,
+ "grade_id": "recursive-functions-test-ast",
+ "locked": false,
+ "points": 4,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "outputs": [],
+ "source": [
+ "val recTestAST: Expr = \n",
+ " ???"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "52ce45fd-d258-4ced-aa1c-f1cf0812fc54",
+ "metadata": {},
+ "source": [
+ "<span class=\"theorem-title\">**Exercise 14 (10 points)**</span> Rewrite\n",
+ "your `eval` function to handle recursive functions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "685ecd67",
+ "metadata": {
+ "deletable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "05f4d8036bb004554dd00637c099d8c1",
+ "grade": false,
+ "grade_id": "eval-recursive-functions-answer",
+ "locked": false,
+ "schema_version": 3,
+ "solution": true,
+ "task": false
+ }
+ },
+ "outputs": [],
+ "source": [
+ "def eval(env: Env, e: Expr): Expr = \n",
+ " ???"
+ ]
+ },
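+ {
+ "cell_type": "markdown",
+ "id": "call-self-binding-sketch",
+ "metadata": {},
+ "source": [
+ "If it helps to see the shape of the change relative to Exercise 10,\n",
+ "here is a sketch of the call case only. The helper name `evalCall` and\n",
+ "its factoring are illustrative, not required; it assumes the `Closure`,\n",
+ "`extend`, and `DynamicTypeError` definitions used earlier, and takes the\n",
+ "recursive evaluator as an argument.\n",
+ "\n",
+ "```scala\n",
+ "// Hypothetical helper, for illustration only: `ev` stands for the recursive eval.\n",
+ "def evalCall(ev: (Env, Expr) => Expr)(env: Env, e1: Expr, e2: Expr): Expr =\n",
+ "  ev(env, e1) match {\n",
+ "    case c @ Closure(Fun(xopt, y, body), cenv) =>\n",
+ "      val v2 = ev(env, e2)\n",
+ "      val selfEnv = xopt match {\n",
+ "        case Some(x) => extend(cenv, x, c) // function sees itself (EvalCallRec)\n",
+ "        case None    => cenv               // no identifier: plain EvalCall\n",
+ "      }\n",
+ "      ev(extend(selfEnv, y, v2), body)\n",
+ "    case _ => throw DynamicTypeError(Call(e1, e2))\n",
+ "  }\n",
+ "```"
+ ]
+ },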
+ {
+ "cell_type": "markdown",
+ "id": "612e3300-ea45-4a6a-9567-d5d530620036",
+ "metadata": {},
+ "source": [
+ "This cell tests your implementation against your test case `sumOneToN`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a878557b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def evalTest(e: Expr): Expr = {\n",
+ " eval(empty, e)\n",
+ "}\n",
+ "test(new AnyFlatSpec {\n",
+ " val yourSumOneToN = evalTest(Call(recTestAST, N(10)))\n",
+ " it should \"Compute 1 to N of 10 correctly\" in { assertResult( N(55) ) { yourSumOneToN } }\n",
+ "}, 5)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e14f1826-f968-40ad-a96c-20a4e266da27",
+ "metadata": {},
+ "source": [
+ "#### Tests"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "cf27f9fb",
+ "metadata": {
+ "deletable": false,
+ "editable": false,
+ "nbgrader": {
+ "cell_type": "code",
+ "checksum": "433dad759b222d03c1d3f041cfe9a2f6",
+ "grade": true,
+ "grade_id": "evall-recursive-functions-tester",
+ "locked": true,
+ "points": 10,
+ "schema_version": 3,
+ "solution": false,
+ "task": false
+ }
+ },
+ "outputs": [],
+ "source": [
+ "def evalTest(e: Expr): Expr = {\n",
+ " eval(empty, e)\n",
+ "}\n",
+ "test(new AnyFlatSpec {\n",
+ " val testcase1 = {\n",
+ " val f = \"f\"\n",
+ " val x = \"x\"\n",
+ " val fbody = If(\n",
+ " Binary(Eq, Var(x), N(10)),\n",
+ " Var(x),\n",
+ " Binary(Plus, Var(x), Call(Var(f), Binary(Plus, Var(x), N(1))))\n",
+ " )\n",
+ " val e1 = Fun(Some(f), x, fbody)\n",
+ " val e2 = N(3) \n",
+ " evalTest(Call(e1, e2)) \n",
+ " }\n",
+ " val testcase2 = {\n",
+ " val a1 = \"a1\"\n",
+ " val a2 = \"a2\"\n",
+ " val x = \"x\"\n",
+ " val e1 = Fun(None, a1, Var(x))\n",
+ " val e2 = ConstDecl(x, N(40), Call(e1, N(-1))) \n",
+ " val e3 = Fun(None, a2, e2)\n",
+ " val e4 = ConstDecl(x, N(10), Call(e3, N(-1)))\n",
+ " evalTest(e4)\n",
+ " }\n",
+ " \n",
+ " it should \"Test Case 1\" in { assertResult( N(52) ) { testcase1 } }\n",
+ " it should \"Test Case 2\" in { assertResult( N(40) ) { testcase2 } }\n",
+ "\n",
+ "}, 5)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Scala",
+ "language": "scala",
+ "name": "scala"
+ },
+ "language_info": {
+ "codemirror_mode": "text/x-scala",
+ "file_extension": ".sc",
+ "mimetype": "text/x-scala",
+ "name": "scala",
+ "nbconvert_exporter": "script",
+ "version": "2.13.14"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}