{ "cells": [ { "cell_type": "markdown", "id": "92f4f706", "metadata": {}, "source": [ "# Time-dependent example\n", "\n", "This notebook describes the calculation of derivative information for a time-dependent problem using tlm_adjoint with the [Firedrake](https://firedrakeproject.org/) backend. Overheads associated with building the records of calculations are discussed, and a checkpointing schedule is applied.\n", "\n", "The binomial checkpointing schedule is based on the method described in:\n", "\n", "- Andreas Griewank and Andrea Walther, 'Algorithm 799: revolve: an implementation of checkpointing for the reverse or adjoint mode of computational differentiation', ACM Transactions on Mathematical Software, 26(1), pp. 19–45, 2000, doi: 10.1145/347837.347846\n", "\n", "## Forward problem\n", "\n", "We consider the solution of a linear time-dependent partial differential equation, followed by the calculation of the square of the $L^2$-norm of the final time solution. We assume real spaces and a real build of Firedrake throughout.\n", "\n", "Specifically we consider the advection-diffusion equation in two dimensions, in the form\n", "\n", "$$\n", " \\partial_t u + \\partial_x \\psi \\partial_y u - \\partial_y \\psi \\partial_x u = \\kappa \\left( \\partial_{xx} + \\partial_{yy} \\right) u,\n", "$$\n", "\n", "where $\\psi$ vanishes on the domain boundary, and subject to zero flux boundary conditions. We consider the spatial domain $\\left( x, y \\right) \\in \\left( 0, 1 \\right)^2$ and temporal domain $t \\in \\left[ 0, 0.1 \\right]$, with $\\psi \\left( x, y \\right) = -\\sin \\left( \\pi x \\right) \\sin \\left( \\pi y \\right)$ and $\\kappa = 0.01$, and an initial condition $u \\left( x, y, t=0 \\right) = \\exp \\left[ -50 \\left( \\left( x - 0.75 \\right)^2 + \\left( y - 0.5 \\right)^2 \\right) \\right]$.\n", "\n", "The problem is discretized using $P_1$ continuous finite elements to represent both the solution $u$ at each time level and the stream function $\\psi$. 
The problem is discretized in time using the implicit trapezoidal rule.\n", "\n", "A simple implementation in Firedrake takes the form:" ] }, { "cell_type": "code", "execution_count": null, "id": "b29607a8", "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "\n", "from firedrake import *\n", "from firedrake.pyplot import tricontourf\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "T = 0.1\n", "N = 100\n", "dt = Constant(T / N)\n", "\n", "mesh = UnitSquareMesh(128, 128)\n", "X = SpatialCoordinate(mesh)\n", "space = FunctionSpace(mesh, \"Lagrange\", 1)\n", "test = TestFunction(space)\n", "trial = TrialFunction(space)\n", "\n", "psi = Function(space, name=\"psi\")\n", "psi.interpolate(-sin(pi * X[0]) * sin(pi * X[1]))\n", "\n", "kappa = Constant(0.01)\n", "\n", "u_0 = Function(space, name=\"u_0\")\n", "u_0.interpolate(exp(-50.0 * ((X[0] - 0.75) ** 2 + (X[1] - 0.5) ** 2)))\n", "\n", "u_n = Function(space, name=\"u_n\")\n", "u_np1 = Function(space, name=\"u_np1\")\n", "\n", "u_h = 0.5 * (u_n + trial)\n", "F = (inner(trial - u_n, test) * dx\n", " + dt * inner(psi.dx(0) * u_h.dx(1) - psi.dx(1) * u_h.dx(0), test) * dx\n", " + dt * inner(kappa * grad(u_h), grad(test)) * dx)\n", "lhs, rhs = system(F)\n", "\n", "problem = LinearVariationalProblem(\n", " lhs, rhs, u_np1,\n", " constant_jacobian=True)\n", "solver = LinearVariationalSolver(\n", " problem, solver_parameters={\"ksp_type\": \"preonly\",\n", " \"pc_type\": \"lu\"})\n", "\n", "u_n.assign(u_0)\n", "for n in range(N):\n", " solver.solve()\n", " u_n.assign(u_np1)\n", "\n", "J = assemble(inner(u_n, u_n) * dx)\n", "\n", "\n", "def plot_output(u, title):\n", " r = (u.dat.data_ro.min(), u.dat.data_ro.max())\n", " eps = (r[1] - r[0]) * 1.0e-12\n", " p = tricontourf(u, np.linspace(r[0] - eps, r[1] + eps, 32))\n", " plt.gca().set_title(title)\n", " plt.colorbar(p)\n", " plt.gca().set_aspect(1.0)\n", "\n", "\n", "plot_output(u_0, title=\"$u_0$\")\n", "plot_output(u_n, title=\"$u_n$\")" ] }, { "cell_type": "markdown", "id": "e2b78d66", "metadata": {}, "source": [ "## Adding tlm_adjoint\n", "\n", "We first modify the code so that tlm_adjoint processes the calculations:" ] }, { "cell_type": "code", "execution_count": null, "id": "fd78c6fa", "metadata": {}, "outputs": [], "source": [ "from firedrake import *\n", "from tlm_adjoint.firedrake import *\n", "\n", "reset_manager(\"memory\", {})\n", "\n", "T = 0.1\n", "N = 100\n", "dt = Constant(T / N)\n", "\n", "mesh = UnitSquareMesh(128, 128)\n", "X = SpatialCoordinate(mesh)\n", "space = FunctionSpace(mesh, \"Lagrange\", 1)\n", "test = TestFunction(space)\n", "trial = TrialFunction(space)\n", "\n", "psi = Function(space, name=\"psi\")\n", "psi.interpolate(-sin(pi * X[0]) * sin(pi * X[1]))\n", "\n", "kappa = Constant(0.01)\n", "\n", "u_0 = Function(space, name=\"u_0\")\n", "u_0.interpolate(exp(-50.0 * ((X[0] - 0.75) ** 2 + (X[1] - 0.5) ** 2)))\n", "\n", "\n", "def forward(u_0, psi):\n", " u_n = Function(space, name=\"u_n\")\n", " u_np1 = Function(space, name=\"u_np1\")\n", "\n", " u_h = 0.5 * (u_n + trial)\n", " F = (inner(trial - u_n, test) * dx\n", " + dt * inner(psi.dx(0) * u_h.dx(1) - psi.dx(1) * u_h.dx(0), test) * dx\n", " + dt * inner(kappa * grad(u_h), grad(test)) * dx)\n", " lhs, rhs = system(F)\n", "\n", " problem = LinearVariationalProblem(\n", " lhs, rhs, u_np1,\n", " constant_jacobian=True)\n", " solver = LinearVariationalSolver(\n", " problem, solver_parameters={\"ksp_type\": \"preonly\",\n", " \"pc_type\": \"lu\"})\n", "\n", " u_n.assign(u_0)\n", " for 
n in range(N):\n", " solver.solve()\n", " u_n.assign(u_np1)\n", "\n", " J = Functional(name=\"J\")\n", " J.assign(inner(u_n, u_n) * dx)\n", " return J\n", "\n", "\n", "start_manager()\n", "J = forward(u_0, psi)\n", "stop_manager()" ] }, { "cell_type": "markdown", "id": "721fd3a1", "metadata": {}, "source": [ "Later we will configure a checkpointing schedule. Resetting the manager resets the record of forward equations but does not reset the checkpointing configuration, and so in this example whenever we reset the manager we also return it to the default checkpointing configuration with `reset_manager(\"memory\", {})`.\n", "\n", "## Computing derivatives using an adjoint\n", "\n", "The `compute_gradient` function can be used to compute derivatives using the adjoint method. Here we compute the derivative of the square of the $L^2$-norm of the final timestep solution, considered a function of the control defined by the initial condition `u_0` and stream function `psi`, with respect to this control:" ] }, { "cell_type": "code", "execution_count": null, "id": "0e5b3bba", "metadata": {}, "outputs": [], "source": [ "dJ_du_0, dJ_dpsi = compute_gradient(J, (u_0, psi))" ] }, { "cell_type": "markdown", "id": "98132cf9", "metadata": {}, "source": [ "As a simple check of the result, note that the solution to the (discretized) partial differential equation is unchanged by the addition of a constant to the stream function. Hence we expect the directional derivative with respect to the stream function, with direction equal to the unity valued function, to be zero. This is indeed found to be the case (except for roundoff errors):" ] }, { "cell_type": "code", "execution_count": null, "id": "b5069278", "metadata": {}, "outputs": [], "source": [ "one = Function(space, name=\"one\")\n", "one.interpolate(Constant(1.0))\n", "\n", "dJ_dpsi_one = var_inner(one, dJ_dpsi)\n", "\n", "print(f\"{dJ_dpsi_one=}\")\n", "\n", "assert abs(dJ_dpsi_one) < 1.0e-17" ] }, { "cell_type": "markdown", "id": "d0ecd1a6", "metadata": {}, "source": [ "## Computing Hessian information using an adjoint of a tangent-linear\n", "\n", "We next compute a Hessian action. 
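In a condensed notation, if a tangent-linear computes the directional derivative $\left( \partial J / \partial \psi \right) \zeta$ for a given direction $\zeta$, then differentiating this again with respect to the initial condition yields\n", "\n", "$$\n", " \frac{\partial}{\partial u_0} \left( \frac{\partial J}{\partial \psi} \zeta \right) = \frac{\partial^2 J}{\partial u_0 \partial \psi} \zeta,\n", "$$\n", "\n", "the action of a 'mixed' second derivative on $\zeta$. 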
Although the following calculation does work, it is inefficient – you may wish to skip forward to the optimized calculations.\n", "\n", "Here we compute a 'mixed' Hessian action, by defining a directional derivative with respect to the stream function, and then differentiating this with respect to the initial condition:" ] }, { "cell_type": "code", "execution_count": null, "id": "95483f48", "metadata": {}, "outputs": [], "source": [ "from firedrake import *\n", "from tlm_adjoint.firedrake import *\n", "\n", "reset_manager(\"memory\", {})\n", "\n", "T = 0.1\n", "N = 100\n", "dt = Constant(T / N)\n", "\n", "mesh = UnitSquareMesh(128, 128)\n", "X = SpatialCoordinate(mesh)\n", "space = FunctionSpace(mesh, \"Lagrange\", 1)\n", "test = TestFunction(space)\n", "trial = TrialFunction(space)\n", "\n", "psi = Function(space, name=\"psi\")\n", "psi.interpolate(-sin(pi * X[0]) * sin(pi * X[1]))\n", "\n", "kappa = Constant(0.01)\n", "\n", "u_0 = Function(space, name=\"u_0\")\n", "u_0.interpolate(exp(-50.0 * ((X[0] - 0.75) ** 2 + (X[1] - 0.5) ** 2)))\n", "\n", "\n", "def forward(u_0, psi):\n", " u_n = Function(space, name=\"u_n\")\n", " u_np1 = Function(space, name=\"u_np1\")\n", "\n", " u_h = 0.5 * (u_n + trial)\n", " F = (inner(trial - u_n, test) * dx\n", " + dt * inner(psi.dx(0) * u_h.dx(1) - psi.dx(1) * u_h.dx(0), test) * dx\n", " + dt * inner(kappa * grad(u_h), grad(test)) * dx)\n", " lhs, rhs = system(F)\n", "\n", " problem = LinearVariationalProblem(\n", " lhs, rhs, u_np1,\n", " constant_jacobian=True)\n", " solver = LinearVariationalSolver(\n", " problem, solver_parameters={\"ksp_type\": \"preonly\",\n", " \"pc_type\": \"lu\"})\n", "\n", " u_n.assign(u_0)\n", " for n in range(N):\n", " solver.solve()\n", " u_n.assign(u_np1)\n", "\n", " J = Functional(name=\"J\")\n", " J.assign(inner(u_n, u_n) * dx)\n", " return J\n", "\n", "\n", "zeta = Function(space, name=\"zeta\")\n", "zeta.assign(psi)\n", "configure_tlm((psi, zeta))\n", "\n", "start_manager()\n", "J = forward(u_0, psi)\n", "stop_manager()\n", "\n", "dJ_dpsi_zeta = var_tlm(J, (psi, zeta))\n", "\n", "d2J_dpsi_zeta_du_0 = compute_gradient(dJ_dpsi_zeta, u_0)" ] }, { "cell_type": "markdown", "id": "ddad9733", "metadata": {}, "source": [ "## Optimization\n", "\n", "In the above we have successfully built a record of calculations, and used this to compute derivative information. However there are two issues:\n", "\n", "1. Building the record has a noticeable cost – the forward calculation has slowed down. In the second order calculation overheads associated with the tangent-linear lead to substantial additional costs.\n", "2. tlm_adjoint records the solution of the partial differential equation on all time levels. The memory usage here is manageable. However memory limits will be exceeded for larger problems with more fields, spatial degrees of freedom, or timesteps.\n", "\n", "Let's fix these issues in order.\n", "\n", "### Optimizing the annotation\n", "\n", "In the above code tlm_adjoint builds a new record for each finite element variational problem it encounters. Even though only one `LinearVariationalSolver` is instantiated, an `EquationSolver` record is instantiated on each call to the `solve` method. Building the record is sufficiently expensive that the forward calculation noticeably slows down, and this also leads to significant extra processing in the derivative calculations.\n", "\n", "Instead we can instantiate an `EquationSolver` directly, and reuse it. However if we do only that then the code will still be inefficient. 
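For example, direct instantiation without any caching configuration takes the form (a sketch; this matches the full code below):\n", "\n", "```\n", "eq = EquationSolver(\n", " lhs == rhs, u_np1,\n", " solver_parameters={\"ksp_type\": \"preonly\",\n", " \"pc_type\": \"lu\"})\n", "```\n", "\n", "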
A single `EquationSolver` will be used, but new linear solver data will be constructed each time its `solve` method is called. We need to also apply an optimization analogous to the `constant_jacobian=True` argument supplied to `LinearVariationalProblem`.\n", "\n", "A simple fix is to add `cache_jacobian=True` when instantiating the `EquationSolver`:\n", "\n", "```\n", "eq = EquationSolver(\n", " lhs == rhs, u_np1,\n", " solver_parameters={\"ksp_type\": \"preonly\",\n", " \"pc_type\": \"lu\"},\n", " cache_jacobian=True)\n", "```\n", "\n", "This works, but we can instead let tlm_adjoint detect that linear solver data can be cached. We can do that by adding `static=True` when instantiating variables whose value is unchanged throughout the forward calculation:" ] }, { "cell_type": "code", "execution_count": null, "id": "2856cf07", "metadata": {}, "outputs": [], "source": [ "from firedrake import *\n", "from tlm_adjoint.firedrake import *\n", "\n", "reset_manager(\"memory\", {})\n", "clear_caches()\n", "\n", "T = 0.1\n", "N = 100\n", "dt = Constant(T / N, static=True)\n", "\n", "mesh = UnitSquareMesh(128, 128)\n", "X = SpatialCoordinate(mesh)\n", "space = FunctionSpace(mesh, \"Lagrange\", 1)\n", "test = TestFunction(space)\n", "trial = TrialFunction(space)\n", "\n", "psi = Function(space, name=\"psi\", static=True)\n", "psi.interpolate(-sin(pi * X[0]) * sin(pi * X[1]))\n", "\n", "kappa = Constant(0.01, static=True)\n", "\n", "u_0 = Function(space, name=\"u_0\", static=True)\n", "u_0.interpolate(exp(-50.0 * ((X[0] - 0.75) ** 2 + (X[1] - 0.5) ** 2)))\n", "\n", "\n", "def forward(u_0, psi):\n", " u_n = Function(space, name=\"u_n\")\n", " u_np1 = Function(space, name=\"u_np1\")\n", "\n", " u_h = 0.5 * (u_n + trial)\n", " F = (inner(trial - u_n, test) * dx\n", " + dt * inner(psi.dx(0) * u_h.dx(1) - psi.dx(1) * u_h.dx(0), test) * dx\n", " + dt * inner(kappa * grad(u_h), grad(test)) * dx)\n", " lhs, rhs = system(F)\n", "\n", " eq = EquationSolver(\n", " lhs == rhs, u_np1,\n", " solver_parameters={\"ksp_type\": \"preonly\",\n", " \"pc_type\": \"lu\"})\n", "\n", " u_n.assign(u_0)\n", " for n in range(N):\n", " eq.solve()\n", " u_n.assign(u_np1)\n", "\n", " J = Functional(name=\"J\")\n", " J.assign(inner(u_n, u_n) * dx)\n", " return J\n", "\n", "\n", "start_manager()\n", "J = forward(u_0, psi)\n", "stop_manager()" ] }, { "cell_type": "markdown", "id": "6fe47e03", "metadata": {}, "source": [ "If we now query the relevant tlm_adjoint caches:" ] }, { "cell_type": "code", "execution_count": null, "id": "eb3638f2", "metadata": {}, "outputs": [], "source": [ "print(f\"{len(assembly_cache())=}\")\n", "print(f\"{len(linear_solver_cache())=}\")\n", "\n", "assert len(assembly_cache()) == 2\n", "assert len(linear_solver_cache()) == 1" ] }, { "cell_type": "markdown", "id": "fee57222", "metadata": {}, "source": [ "we find that linear solver data associated with a single matrix has been cached. We also find that two assembled objects have been cached – it turns out that there are two cached matrices. As well as caching the matrix associated with the left-hand-side of the discrete problem, a matrix associated with the *right-hand-side* has been assembled and cached. Assembly of the right-hand-side has been converted into a matrix multiply. 
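That is, because the right-hand-side of the discrete problem is linear in $u_n$, each timestep takes the algebraic form\n", "\n", "$$\n", " A u_{n + 1} = B u_n,\n", "$$\n", "\n", "where, in a sketch of the linear algebra with $u_n$ here denoting the vector of degrees of freedom, the matrices $A$ and $B$ are each assembled once and then reused for every timestep. 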
If we wished we could disable right-hand-side optimizations by adding `cache_rhs_assembly=False`:\n", "\n", "```\n", "eq = EquationSolver(\n", " lhs == rhs, u_np1,\n", " solver_parameters={\"ksp_type\": \"preonly\",\n", " \"pc_type\": \"lu\"},\n", " cache_rhs_assembly=False)\n", "```\n", "\n", "### Using a checkpointing schedule\n", "\n", "To address the storage issue we enable checkpointing. Here we enable binomial checkpointing with storage of a maximum of $10$ forward restart checkpoints in memory:" ] }, { "cell_type": "code", "execution_count": null, "id": "20b7fb3f", "metadata": {}, "outputs": [], "source": [ "from firedrake import *\n", "from tlm_adjoint.firedrake import *\n", "\n", "import logging\n", "\n", "logger = logging.getLogger(\"tlm_adjoint\")\n", "logger.setLevel(logging.DEBUG)\n", "root_logger = logging.getLogger()\n", "if len(logger.handlers) == 1:\n", " if len(root_logger.handlers) == 1:\n", " root_logger.handlers.pop()\n", " root_logger.addHandler(logger.handlers.pop())\n", "\n", "reset_manager(\"memory\", {})\n", "clear_caches()\n", "\n", "T = 0.1\n", "N = 100\n", "dt = Constant(T / N, static=True)\n", "\n", "mesh = UnitSquareMesh(128, 128)\n", "X = SpatialCoordinate(mesh)\n", "space = FunctionSpace(mesh, \"Lagrange\", 1)\n", "test = TestFunction(space)\n", "trial = TrialFunction(space)\n", "\n", "psi = Function(space, name=\"psi\", static=True)\n", "psi.interpolate(-sin(pi * X[0]) * sin(pi * X[1]))\n", "\n", "kappa = Constant(0.01, static=True)\n", "\n", "u_0 = Function(space, name=\"u_0\", static=True)\n", "u_0.interpolate(exp(-50.0 * ((X[0] - 0.75) ** 2 + (X[1] - 0.5) ** 2)))\n", "\n", "\n", "def forward(u_0, psi):\n", " u_n = Function(space, name=\"u_n\")\n", " u_np1 = Function(space, name=\"u_np1\")\n", "\n", " u_h = 0.5 * (u_n + trial)\n", " F = (inner(trial - u_n, test) * dx\n", " + dt * inner(psi.dx(0) * u_h.dx(1) - psi.dx(1) * u_h.dx(0), test) * dx\n", " + dt * inner(kappa * grad(u_h), grad(test)) * dx)\n", " lhs, rhs = system(F)\n", "\n", " eq = EquationSolver(\n", " lhs == rhs, u_np1,\n", " solver_parameters={\"ksp_type\": \"preonly\",\n", " \"pc_type\": \"lu\"})\n", "\n", " u_n.assign(u_0)\n", " for n in range(N):\n", " eq.solve()\n", " u_n.assign(u_np1)\n", " if n < N - 1:\n", " new_block()\n", "\n", " J = Functional(name=\"J\")\n", " J.assign(inner(u_n, u_n) * dx)\n", " return J\n", "\n", "\n", "configure_checkpointing(\"multistage\", {\"snaps_in_ram\": 10, \"blocks\": N})\n", "start_manager()\n", "J = forward(u_0, psi)\n", "stop_manager()" ] }, { "cell_type": "markdown", "id": "c20a3dbb", "metadata": {}, "source": [ "The key changes here are:\n", "\n", "- Configuration of a checkpointing schedule using `configure_checkpointing`. Here binomial checkpointing is applied, with a maximum of $10$ forward restart checkpoints stored in memory, indicated using the `\"snaps_in_ram\"` parameter. The total number of steps is indicated using the `\"blocks\"` parameter.\n", "- The indication of the steps using `new_block()`.\n", "\n", "Extra logging output is also enabled so that we can see the details of the checkpointing schedule.\n", "\n", "### Computing derivatives\n", "\n", "We are now ready to compute derivatives. However a key restriction is that we can, with this checkpointing schedule, only perform the adjoint calculation *once* per forward calculation. 
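This is because the reverse sweep consumes the stored forward restart checkpoints as it proceeds, recomputing sections of the forward from them as required. For the binomial schedule of Griewank and Walther, with $s$ checkpoints and each step advanced at most $t$ times, up to $\binom{s + t}{s}$ steps can be reversed; with $s = 10$ the $N = 100$ steps used here are covered with $t = 3$, since $\binom{12}{10} = 66 < 100 \le \binom{13}{10} = 286$. 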
We cannot call `compute_gradient` a second time, without first rerunning the entire forward calculation.\n", "\n", "In the following we compute both first and second derivative information using a single adjoint calculation:" ] }, { "cell_type": "code", "execution_count": null, "id": "33534595", "metadata": {}, "outputs": [], "source": [ "from firedrake import *\n", "from tlm_adjoint.firedrake import *\n", "\n", "import logging\n", "\n", "logger = logging.getLogger(\"tlm_adjoint\")\n", "logger.setLevel(logging.DEBUG)\n", "root_logger = logging.getLogger()\n", "if len(logger.handlers) == 1:\n", " if len(root_logger.handlers) == 1:\n", " root_logger.handlers.pop()\n", " root_logger.addHandler(logger.handlers.pop())\n", "\n", "reset_manager(\"memory\", {})\n", "clear_caches()\n", "\n", "T = 0.1\n", "N = 100\n", "dt = Constant(T / N, static=True)\n", "\n", "mesh = UnitSquareMesh(128, 128)\n", "X = SpatialCoordinate(mesh)\n", "space = FunctionSpace(mesh, \"Lagrange\", 1)\n", "test = TestFunction(space)\n", "trial = TrialFunction(space)\n", "\n", "psi = Function(space, name=\"psi\", static=True)\n", "psi.interpolate(-sin(pi * X[0]) * sin(pi * X[1]))\n", "\n", "kappa = Constant(0.01, static=True)\n", "\n", "u_0 = Function(space, name=\"u_0\", static=True)\n", "u_0.interpolate(exp(-50.0 * ((X[0] - 0.75) ** 2 + (X[1] - 0.5) ** 2)))\n", "\n", "\n", "def forward(u_0, psi):\n", " u_n = Function(space, name=\"u_n\")\n", " u_np1 = Function(space, name=\"u_np1\")\n", "\n", " u_h = 0.5 * (u_n + trial)\n", " F = (inner(trial - u_n, test) * dx\n", " + dt * inner(psi.dx(0) * u_h.dx(1) - psi.dx(1) * u_h.dx(0), test) * dx\n", " + dt * inner(kappa * grad(u_h), grad(test)) * dx)\n", " lhs, rhs = system(F)\n", "\n", " eq = EquationSolver(\n", " lhs == rhs, u_np1,\n", " solver_parameters={\"ksp_type\": \"preonly\",\n", " \"pc_type\": \"lu\"})\n", "\n", " u_n.assign(u_0)\n", " for n in range(N):\n", " eq.solve()\n", " u_n.assign(u_np1)\n", " if n < N - 1:\n", " new_block()\n", "\n", " J = Functional(name=\"J\")\n", " J.assign(inner(u_n, u_n) * dx)\n", " return J\n", "\n", "\n", "zeta_u_0 = ZeroFunction(space, name=\"zeta_u_0\")\n", "zeta_psi = Function(space, name=\"zeta_psi\", static=True)\n", "zeta_psi.assign(psi)\n", "configure_tlm(((u_0, psi), (zeta_u_0, zeta_psi)))\n", "\n", "configure_checkpointing(\"multistage\", {\"snaps_in_ram\": 10, \"blocks\": N})\n", "start_manager()\n", "J = forward(u_0, psi)\n", "stop_manager()\n", "\n", "dJ_dpsi_zeta = var_tlm(J, ((u_0, psi), (zeta_u_0, zeta_psi)))\n", "\n", "dJ_du_0, dJ_dpsi, d2J_dpsi_zeta_du_0 = compute_gradient(\n", " dJ_dpsi_zeta, (zeta_u_0, zeta_psi, u_0))" ] }, { "cell_type": "markdown", "id": "283156e2", "metadata": {}, "source": [ "The derivative calculation now alternates between forward + tangent-linear calculations, and adjoint calculations.\n", "\n", "If we wished we could perform higher order adjoint calculations, using a binomial checkpointing schedule, by supplying a higher order tangent-linear configuration and differentiating the result." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 5 }