.. _sdkruntime-api-reference:

SdkRuntime API Reference
========================

This section presents the ``SdkRuntime`` Python host API reference and
associated utilities used to develop kernels for the Cerebras Wafer Scale
Engine.

SdkRuntime
----------

.. py:module:: cerebras.sdk.runtime.sdkruntimepybind

Python API for ``SdkRuntime`` functions.

.. py:class:: SdkRuntime(bindir: Union[pathlib.Path, str], **kwargs)
   :module: cerebras.sdk.runtime.sdkruntimepybind

   Bases: :class:`object`

   Manages the execution of SDK programs on the Cerebras Wafer Scale Engine
   (WSE) or simfabric. The constructor analyzes the WSE ELFs in ``bindir`` and
   prepares the WSE or simfabric for a run. Requires the CM IP address and
   port for WSE runs.

   :param bindir: Path to the ELF files compiled by ``cslc``. The runtime
       collects the I/O and fabric parameters automatically, including height,
       width, number of channels, width of buffers, etc.
   :type bindir: ``Union[pathlib.Path, str]``

   :Keyword Arguments:
       * **cmaddr** (``str``) -- ``'IP_ADDRESS:PORT'`` string of the CM. Omit
         this ``kwarg`` to run on simfabric.

   **Example**:

   In the following example, an ``SdkRuntime`` runner object is instantiated.
   If ``args.cmaddr`` is non-empty, the kernel code runs on the WSE at that
   address; otherwise, it runs on simfabric. The compiled kernel code in the
   directory ``args.name`` has exported symbols ``A`` and ``B`` pointing to
   arrays on the device. After loading the code and starting the run with
   ``load()`` and ``run()``, data on the host stored in ``data`` is copied to
   ``A`` on the device, and then ``B`` on the device is copied back into
   ``data`` on the host.

   .. code-block:: python

       runner = SdkRuntime(args.name, cmaddr=args.cmaddr)

       symbol_A = runner.get_id("A")
       symbol_B = runner.get_id("B")

       runner.load()
       runner.run()

       runner.memcpy_h2d(symbol_A, data, px, py, w, h, l, streaming=False,
                         data_type=memcpy_dtype, order=memcpy_order, nonblock=False)
       runner.memcpy_d2h(data, symbol_B, px, py, w, h, l, streaming=False,
                         data_type=memcpy_dtype, order=memcpy_order, nonblock=False)

   .. py:method:: call(symbol: str, params: numpy.ndarray, **kwargs) -> Task
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Trigger a host-callable function defined in the kernel.

      :param symbol: The exported name of the symbol corresponding to a
          host-callable function.
      :type symbol: ``str``
      :param params: Array of parameters passed as arguments to the
          host-callable function. The parameters must be 32-bit, and no more
          than fifteen parameters are supported.
      :type params: ``numpy.ndarray``

      :Keyword Arguments:
          * **nonblock** (``bool``) -- Nonblocking if ``True``, blocking
            otherwise.

      :returns:
          * **task_handle** (``Task``) -- Handle to the task launched by
            ``call``.

      **Example**:

      Consider a kernel which defines a host-callable function ``fn_foo`` by:

      .. code-block:: csl

          comptime {
              @export_symbol(fn_foo);
              @rpc(LAUNCH);
          }

      The host calls ``fn_foo`` with ``runner.call("fn_foo", [], nonblock=False)``.

   .. py:method:: dump_core(corefile: str)
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Dump the core of a simulator run, to be used for debugging with
      ``csdb``. Note that the specified name of the corefile MUST be
      ``"corefile.cs1"`` to use with ``csdb``, and this method can only be
      called after calling ``stop()``.

      :param corefile: Name of the corefile. Must be ``"corefile.cs1"`` to use
          with ``csdb``.
      :type corefile: ``str``
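
      For example, at the end of a simulator run (continuing the example
      above, with ``runner`` an ``SdkRuntime`` object running on simfabric),
      a core file can be produced with:

      .. code-block:: python

          # Finish the run first; dump_core may only be called after stop().
          runner.stop()

          # The corefile name must be "corefile.cs1" for csdb to find it.
          runner.dump_core("corefile.cs1")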

   .. py:method:: get_id(symbol: str)
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Retrieve the integer representation of a symbol exported by the kernel.
      Possible symbols include a data tensor or a host-callable function.

      :param symbol: The exported name of the symbol.
      :type symbol: ``str``

   .. py:method:: is_task_done(task_handle: Task) -> bool
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Query whether task ``task_handle`` is complete.

      :param task_handle: Handle to a task previously launched by
          ``SdkRuntime``.
      :type task_handle: ``Task``

      :returns:
          * **task_done** (``bool``) -- ``True`` if the task is done, and
            ``False`` otherwise.

   .. py:method:: launch(symbol: str, *args, **kwargs) -> Task
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Trigger a host-callable function defined in the kernel, with type
      checking for arguments.

      :param symbol: The exported name of the symbol corresponding to a
          host-callable function.
      :type symbol: ``str``

      :Positional Arguments:
          * Match the arguments of the host-callable function. ``launch``
            performs type checking on the arguments.

      :Keyword Arguments:
          * **nonblock** (``bool``) -- Nonblocking if ``True``, blocking
            otherwise.

      :returns:
          * **task_handle** (``Task``) -- Handle to the task launched by
            ``launch``.
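
      As an illustrative sketch (not taken from the SDK examples), suppose
      the kernel exports a host-callable function ``fn_scale`` taking a
      single ``f32`` argument. A nonblocking ``launch`` can then be
      overlapped with other host work and completed with ``is_task_done`` or
      ``task_wait``:

      .. code-block:: python

          # Launch with type-checked positional arguments; returns a handle
          # immediately because nonblock=True.
          task = runner.launch("fn_scale", 2.0, nonblock=True)

          # ... do other host-side work here ...

          # Either poll for completion or block until the task finishes.
          if not runner.is_task_done(task):
              runner.task_wait(task)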

   .. py:method:: load()
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Load the binaries onto simfabric or the WSE. It may take 80+ seconds to
      load the binaries onto the WSE.

   .. py:method:: memcpy_d2h(dest: numpy.ndarray, src: int, px: int, py: int, w: int, h: int, elem_per_pe: int, **kwargs) -> Task
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Receive data from the device into a host tensor via either copy mode or
      streaming mode. The data is collected from the region of interest (ROI),
      a bounding box starting at coordinate ``(px, py)`` with width ``w`` and
      height ``h``.

      :param dest: A 3-D host tensor ``A[h][w][l]``, wrapped in a 1-D array
          according to keyword argument ``order``.
      :type dest: ``numpy.ndarray``
      :param src: A user-defined color if keyword argument ``streaming=True``,
          symbol of a device tensor otherwise.
      :type src: ``int``
      :param px: ``x``-coordinate of the start point of the ROI.
      :type px: ``int``
      :param py: ``y``-coordinate of the start point of the ROI.
      :type py: ``int``
      :param w: Width of the ROI.
      :type w: ``int``
      :param h: Height of the ROI.
      :type h: ``int``
      :param elem_per_pe: Number of elements per PE. Only 16-bit and 32-bit
          element data types are supported. If the tensor has ``k`` elements
          per PE, ``elem_per_pe`` is ``k`` even if the data type is 16-bit. If
          the data type is 16-bit, the user has to extend the tensor to a
          32-bit one, with zero filled in the higher 16 bits.
      :type elem_per_pe: ``int``

      :Keyword Arguments:
          * **streaming** (``bool``) -- Streaming mode if ``True``, copy mode
            otherwise.
          * **data_type** (``MemcpyDataType``) -- 32-bit if
            ``MemcpyDataType.MEMCPY_32BIT`` or 16-bit if
            ``MemcpyDataType.MEMCPY_16BIT``. Note that this argument has no
            effect if ``streaming`` is ``True``, and the user must handle the
            data appropriately in the receiving wavelet-triggered task.
            Additionally, the underlying type of the tensor ``dest`` must be
            32-bit. The tensor must be extended to a 32-bit one with zero
            filled in the higher 16 bits.
          * **order** (``MemcpyOrder``) -- Row-major if
            ``MemcpyOrder.ROW_MAJOR`` or column-major if
            ``MemcpyOrder.COL_MAJOR``.
          * **nonblock** (``bool``) -- Nonblocking if ``True``, blocking
            otherwise.

      :returns:
          * **task_handle** (``Task``) -- Handle to the task launched by
            ``memcpy_d2h``.
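
      As a minimal copy-mode sketch (assuming ``runner``, a device symbol
      ``symbol_B``, and ROI parameters ``px``, ``py``, ``w``, ``h``, and
      ``elem_per_pe`` as in the examples above, with ``MemcpyDataType`` and
      ``MemcpyOrder`` imported from
      ``cerebras.sdk.runtime.sdkruntimepybind``), the flat host buffer can be
      reshaped back into the ``A[h][w][l]`` layout after the copy:

      .. code-block:: python

          import numpy as np

          # 1-D host buffer large enough for the whole ROI.
          out = np.zeros(h * w * elem_per_pe, dtype=np.float32)

          runner.memcpy_d2h(out, symbol_B, px, py, w, h, elem_per_pe,
                            streaming=False, data_type=MemcpyDataType.MEMCPY_32BIT,
                            order=MemcpyOrder.ROW_MAJOR, nonblock=False)

          # With ROW_MAJOR ordering, element l of the PE at offset (x, y)
          # within the ROI sits at out[(y * w + x) * elem_per_pe + l],
          # so the buffer reshapes into the documented A[h][w][l] layout.
          result = out.reshape(h, w, elem_per_pe)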

   .. py:method:: memcpy_h2d(dest: int, src: numpy.ndarray, px: int, py: int, w: int, h: int, elem_per_pe: int, **kwargs) -> Task
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Send a host tensor to the device via either copy mode or streaming
      mode. The data is distributed into the region of interest (ROI), a
      bounding box starting at coordinate ``(px, py)`` with width ``w`` and
      height ``h``.

      :param dest: A user-defined color if keyword argument ``streaming=True``,
          symbol of a device tensor otherwise.
      :type dest: ``int``
      :param src: A 3-D host tensor ``A[h][w][l]``, wrapped in a 1-D array
          according to keyword argument ``order``.
      :type src: ``numpy.ndarray``
      :param px: ``x``-coordinate of the start point of the ROI.
      :type px: ``int``
      :param py: ``y``-coordinate of the start point of the ROI.
      :type py: ``int``
      :param w: Width of the ROI.
      :type w: ``int``
      :param h: Height of the ROI.
      :type h: ``int``
      :param elem_per_pe: Number of elements per PE. Only 16-bit and 32-bit
          element data types are supported. If the tensor has ``k`` elements
          per PE, ``elem_per_pe`` is ``k`` even if the data type is 16-bit. If
          the data type is 16-bit, the user has to extend the tensor to a
          32-bit one, with zero filled in the higher 16 bits.
      :type elem_per_pe: ``int``

      :Keyword Arguments:
          * **streaming** (``bool``) -- Streaming mode if ``True``, copy mode
            otherwise.
          * **data_type** (``MemcpyDataType``) -- 32-bit if
            ``MemcpyDataType.MEMCPY_32BIT`` or 16-bit if
            ``MemcpyDataType.MEMCPY_16BIT``. Note that this argument has no
            effect if ``streaming`` is ``True``, and the user must handle the
            data appropriately in the receiving wavelet-triggered task.
            Additionally, the underlying type of the tensor ``src`` must be
            32-bit. The tensor must be extended to a 32-bit one with zero
            filled in the higher 16 bits.
          * **order** (``MemcpyOrder``) -- Row-major if
            ``MemcpyOrder.ROW_MAJOR`` or column-major if
            ``MemcpyOrder.COL_MAJOR``.
          * **nonblock** (``bool``) -- Nonblocking if ``True``, blocking
            otherwise.

      :returns:
          * **task_handle** (``Task``) -- Handle to the task launched by
            ``memcpy_h2d``.

   .. py:method:: run()
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Start the simfabric or WSE run and wait for commands from the host
      runtime.

   .. py:method:: stop()
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Wait for all pending commands (data transfers and kernel function calls)
      to complete and then stop simfabric or the WSE. After this call is
      complete, no new commands will be accepted for this ``SdkRuntime``
      object. ``stop`` must be called to end a program; otherwise, the runtime
      will emit an error.

   .. py:method:: task_wait(task_handle: Task)
      :module: cerebras.sdk.runtime.sdkruntimepybind

      Wait for the task ``task_handle`` to complete.

      :param task_handle: Handle to a task previously launched by
          ``SdkRuntime``.
      :type task_handle: ``Task``

.. py:class:: MemcpyDataType
   :module: cerebras.sdk.runtime.sdkruntimepybind

   Bases: :class:`Enum`

   Specifies the data size for transfers using ``memcpy_d2h`` and
   ``memcpy_h2d`` copy mode.

   :Values:
       * **MEMCPY_16BIT**
       * **MEMCPY_32BIT**

.. py:class:: MemcpyOrder
   :module: cerebras.sdk.runtime.sdkruntimepybind

   Bases: :class:`Enum`

   Specifies the mapping of data for transfers using ``memcpy_d2h`` and
   ``memcpy_h2d``.

   :Values:
       * **ROW_MAJOR**
       * **COL_MAJOR**

.. py:class:: Task
   :module: cerebras.sdk.runtime.sdkruntimepybind

   Handle to a task launched by ``SdkRuntime``.

runtime_utils
-------------

Utility functions for preparing input and output tensors.

.. py:module:: cerebras.sdk.runtime.runtime_utils

.. py:function:: convert_input_tensor(portmap: str, arr: numpy.ndarray) -> (int, int, int, int, int, numpy.ndarray)
   :module: cerebras.sdk.runtime.runtime_utils

   Given a portmap and array, prepare and return the args that should be
   passed to ``memcpy_h2d``. Note that this function is only compatible with
   ``order=ROW_MAJOR``.

   :param portmap: ISL portmap giving the input mapping of the array.
   :type portmap: ``str``
   :param arr: Input array to be prepared for the input data transfer.
   :type arr: ``numpy.ndarray``

   :returns: **(px, py, w, h, elem_per_pe, mapped_arr)**

       * **px** (``int``) -- ``x``-coordinate of the start point of the region
         of interest (ROI).
       * **py** (``int``) -- ``y``-coordinate of the start point of the ROI.
       * **w** (``int``) -- Width of the ROI.
       * **h** (``int``) -- Height of the ROI.
       * **elem_per_pe** (``int``) -- Number of elements per PE.
       * **mapped_arr** (``numpy.ndarray``) -- A prepared input array for use
         with ``memcpy_h2d``.

.. py:function:: format_output_tensor(portmap: str, datatype: type, flat_out_arr: numpy.ndarray) -> numpy.ndarray
   :module: cerebras.sdk.runtime.runtime_utils

   Given a portmap and an unshuffled array filled by a ``memcpy_d2h`` call,
   prepare and return the shuffled data. Note that this function is only
   compatible with ``order=ROW_MAJOR``.

   :param portmap: ISL portmap giving the output mapping of the array.
   :type portmap: ``str``
   :param datatype: Type of the data to be transferred.
   :type datatype: ``type``
   :param flat_out_arr: Output array filled by ``memcpy_d2h``.
   :type flat_out_arr: ``numpy.ndarray``

   :returns:
       * **output_arr** (``numpy.ndarray``) -- Formatted output array with the
         correct indexing as specified by ``portmap``.

.. py:function:: prepare_output_tensor(portmap: str, datatype: type) -> (int, int, int, int, int, numpy.ndarray)
   :module: cerebras.sdk.runtime.runtime_utils

   Given a portmap and datatype, prepare and return the args that should be
   passed to ``memcpy_d2h``. Note that this function is only compatible with
   ``order=ROW_MAJOR``.

   :param portmap: ISL portmap giving the output mapping of the array.
   :type portmap: ``str``
   :param datatype: Type of the data to be transferred.
   :type datatype: ``type``

   :returns: **(px, py, w, h, elem_per_pe, mapped_arr)**

       * **px** (``int``) -- ``x``-coordinate of the start point of the region
         of interest (ROI).
       * **py** (``int``) -- ``y``-coordinate of the start point of the ROI.
       * **w** (``int``) -- Width of the ROI.
       * **h** (``int``) -- Height of the ROI.
       * **elem_per_pe** (``int``) -- Number of elements per PE.
       * **mapped_arr** (``numpy.ndarray``) -- A prepared output array for use
         with ``memcpy_d2h``.
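
Taken together, these functions wrap the bookkeeping around copy-mode
transfers. The following is an illustrative sketch only: it assumes
``runner`` is an already-loaded ``SdkRuntime`` object, ``in_arr`` is the host
input array, ``in_portmap`` and ``out_portmap`` are valid ISL portmap strings
(not shown here), and ``A`` and ``B`` are exported device symbols.

.. code-block:: python

    import numpy as np
    from cerebras.sdk.runtime.runtime_utils import (
        convert_input_tensor, prepare_output_tensor, format_output_tensor)
    from cerebras.sdk.runtime.sdkruntimepybind import MemcpyDataType, MemcpyOrder

    # Shuffle the host input array into the layout expected by memcpy_h2d.
    px, py, w, h, elem_per_pe, mapped_in = convert_input_tensor(in_portmap, in_arr)
    runner.memcpy_h2d(runner.get_id("A"), mapped_in, px, py, w, h, elem_per_pe,
                      streaming=False, data_type=MemcpyDataType.MEMCPY_32BIT,
                      order=MemcpyOrder.ROW_MAJOR, nonblock=False)

    # Allocate a flat output buffer sized according to the output portmap ...
    px, py, w, h, elem_per_pe, flat_out = prepare_output_tensor(out_portmap, np.float32)
    runner.memcpy_d2h(flat_out, runner.get_id("B"), px, py, w, h, elem_per_pe,
                      streaming=False, data_type=MemcpyDataType.MEMCPY_32BIT,
                      order=MemcpyOrder.ROW_MAJOR, nonblock=False)

    # ... then restore the indexing described by the portmap.
    out_arr = format_output_tensor(out_portmap, np.float32, flat_out)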

sdk_utils
---------

Utility functions for common operations with ``SdkRuntime``.

.. py:module:: cerebras.sdk.sdk_utils

.. py:function:: memcpy_view(arr: numpy.ndarray, datatype: numpy.dtype) -> numpy.ndarray.view
   :module: cerebras.sdk.sdk_utils

   Returns a 32-, 16-, or 8-bit view of a 32-bit numpy array (only the lower
   16 or 8 bits of each 32-bit word in the last two cases).

   :param arr: A numpy array with 4 bytes per element on which the numpy view
       will be created.
   :type arr: ``numpy.ndarray``
   :param datatype: The numpy data type which should be used in the output
       view. The itemsize must be 1, 2, or 4 bytes.
   :type datatype: ``numpy.dtype``

   :returns:
       * **output_view** (``numpy.ndarray.view``) -- Numpy view into ``arr``
         with the specified numpy data type.

   **Example**:

   ``memcpy_view`` simplifies the use of various precision data types when
   copying between host and device. Consider the following Python host code,
   which creates a ``float16`` view into a numpy array. Note that this array
   *must* be 32-bit. The user can fill the array with ``float16`` data and
   copy it to an array on the device with CSL data type ``f16``.

   .. code-block:: python

       x_symbol = runner.get_id('x')

       # This container array must be 32-bit
       x_container = np.zeros(N, dtype=np.uint32)
       x = sdk_utils.memcpy_view(x_container, np.float16)
       x.fill(0.5)

       runner.memcpy_h2d(x_symbol, x_container, 0, 0, 1, 1, N, streaming=False,
                         data_type=MemcpyDataType.MEMCPY_16BIT,
                         order=MemcpyOrder.ROW_MAJOR, nonblock=False)

debug_util
----------

Utilities for parsing debug output and core files of a simulator run.

.. py:module:: cerebras.sdk.debug.debug_util

.. py:class:: debug_util(bindir: Union[pathlib.Path, str])
   :module: cerebras.sdk.debug.debug_util

   Bases: :class:`object`

   Loads the ELF files in ``bindir`` in order to dump symbols for debugging.
   The user does not need to export the symbols in the kernel. ``debug_util``
   dumps the core and looks for the symbols in the ELFs. If the symbol at
   ``Px.y`` is not found in the corresponding ELF, ``debug_util`` emits an
   error. The most common errors are either: 1) a wrong coordinate passed to
   ``get_symbol()``, or 2) a correct coordinate, but the symbol has been
   removed due to compiler optimization. One can use ``readelf`` to check
   whether the symbol exists. If it does not, the user can export the symbol
   in the kernel to keep it in the ELF.

   The functionality of this class is only supported in the simulator.

   **Example**:

   .. code-block:: python

       import numpy as np
       from cerebras.sdk.runtime.sdkruntimepybind import SdkRuntime
       from cerebras.sdk.debug.debug_util import debug_util

       # run the app
       # dirname is the path to ELFs
       simulator = SdkRuntime(dirname)
       simulator.load()
       simulator.run()
       ...
       simulator.stop()

       # retrieve symbols after the run
       debug_mod = debug_util(dirname)

       # assume the core rectangle starts at P4.1, the dimension is
       # width-by-height, and we want to retrieve the symbol y for every PE
       core_offset_x = 4
       core_offset_y = 1
       for py in range(height):
           for px in range(width):
               t = debug_mod.get_symbol(core_offset_x+px, core_offset_y+py, 'y', np.float32)
               print(f"At (py, px) = {py, px}, symbol y = {t}")

   .. py:method:: get_symbol(col: int, row: int, symbol: str, dtype: numpy.dtype) -> numpy.ndarray

      Read the value of ``symbol`` of the given type at the given PE
      coordinates. Note that each call to this function scans the whole
      fabric, so prefer ``get_symbol_rect`` over calling this in a loop.

      :param col: ``x``-coordinate (column) of the PE, indexed from the
          northwest corner of the entire fabric (NOT the program rectangle).
      :type col: ``int``
      :param row: ``y``-coordinate (row) of the PE, indexed from the northwest
          corner of the entire fabric (NOT the program rectangle).
      :type row: ``int``
      :param symbol: Name of the symbol to be read.
      :type symbol: ``str``
      :param dtype: Numpy data type of the values contained by the symbol.
      :type dtype: ``numpy.dtype``

      :returns:
          * **output_arr** (``numpy.ndarray``) -- Numpy array of output values
            read at the symbol.

   .. py:method:: get_symbol_rect(rectangle: Rectangle, symbol: str, dtype: numpy.dtype) -> numpy.ndarray

      Read the value of ``symbol`` of the given type for a rectangle of PEs.

      :param rectangle: Rectangle specified as ``((col, row), (width, height))``,
          indexed from the northwest corner of the entire fabric (NOT the
          program rectangle).
      :type rectangle: ``Rectangle``
      :param symbol: Name of the symbol to be read.
      :type symbol: ``str``
      :param dtype: Numpy data type of the values contained by the symbol.
      :type dtype: ``numpy.dtype``

      :returns:
          * **output_arr** (``numpy.ndarray``) -- Numpy array of output values
            read at the symbol. The first two dimensions of the returned array
            are the PE coordinates ``(column, row)`` relative to the rectangle.
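
      For instance, the per-PE loop in the class-level example above could be
      replaced by a single call; a minimal sketch, assuming the same
      ``debug_mod``, offsets, ``width``, and ``height``:

      .. code-block:: python

          # One call reads 'y' for the whole width-by-height core rectangle.
          y_all = debug_mod.get_symbol_rect(
              ((core_offset_x, core_offset_y), (width, height)), 'y', np.float32)

          # First two dimensions are (column, row) relative to the rectangle.
          for py in range(height):
              for px in range(width):
                  print(f"At (py, px) = {py, px}, symbol y = {y_all[px, py]}")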

   .. py:method:: read_trace(px: int, py: int, name: str) -> list

      Parse a CSL trace buffer with name ``name`` at the given PE coordinates.

      :param px: ``x``-coordinate of the PE, indexed from the northwest corner
          of the entire fabric (NOT the program rectangle).
      :type px: ``int``
      :param py: ``y``-coordinate of the PE, indexed from the northwest corner
          of the entire fabric (NOT the program rectangle).
      :type py: ``int``
      :param name: Name of the trace buffer to be read.
      :type name: ``str``

      :returns:
          * **trace_output** (``list``) -- Heterogeneous list of trace values.

      **Example**:

      Consider a device kernel which initializes a trace buffer with the CSL
      ``debug`` library and uses it to record values:

      .. code-block:: csl

          const debug_mod = @import_module("<debug>",
              .{ .key = "my_trace", .buffer_size = 100 });

          fn foo() void {
              debug_mod.trace_timestamp();
              debug_mod.trace_string("Bar");
              debug_mod.trace_i16(1);
          }

      Then the trace can be read in the host code with:

      .. code-block:: python

          trace_output = debug_mod.read_trace(4, 1, 'my_trace')
          print(trace_output)

      If ``foo`` was executed only once, then ``trace_output`` will be a
      heterogeneous list containing a timestamp, the string ``"Bar"``, and the
      number 1.