| {"instruction": "You are a expert in writing Triton operators for efficient GPU programming. Use triton language write a kernel and wrapper according following instruction.\n The Triton code defines a custom attention mechanism in PyTorch using the Triton library. This attention mechanism is implemented as a custom autograd function `LightningAttention2NoDecay` with `forward` and `backward` methods. The forward method computes the attention output given input tensors Q (queries), K (keys), and V (values), while the backward method computes gradients for Q, K, and V given the gradient of the output.\n\n The `_fwd_kernel` is responsible for the forward pass computation. It calculates the attention output by processing Q, K, and V in blocks of size `BLOCK` (64). It uses `NUM_BLOCK` to determine how many such blocks exist along the sequence dimension. The kernel loads segments of Q, K, and V, computes their dot product, and uses the result to calculate the output by combining intra-block (within the block) and inter-block (between blocks) interactions.\n\n The `_bwd_intra_kernel` is used in the backward pass to compute gradients within each block. It processes the gradient of the output (`DO`) and calculates the gradients `DQ`, `DK`, and `DV` for each of the input tensors. It uses a block size of `CBLOCK` (32) for sub-block computations, iterating over `NUM_BLOCK` blocks.\n\n The `_bwd_inter_kernel` computes gradients involving interactions between blocks. It iteratively updates the accumulated gradients for the entire input sequence. It uses the computed values from the `_bwd_intra_kernel` to adjust gradients for keys (K) and values (V).\n\n The code uses a grid launch strategy for parallel computation across batches and heads, defined by `b * h`, and sequence dimension divided into blocks.\n\n Important parameters and settings include:\n - `BLOCK`: Main block size (64) used in computations.\n - `NUM_BLOCK`: Number of blocks along the sequence dimension.\n - `CBLOCK`: Sub-block size (32) used for intra-block gradient calculations.\n - `NUM_CBLOCK`: Number of sub-blocks within each block for intra operations.\n\n These kernels are called using a grid defined by `(b * h, cdiv(e, BLOCK_MODEL))` for the forward pass and intra-block backward pass, and `(b * h,)` for the inter-block backward pass. The context saves Q, K, and V during the forward pass to facilitate efficient gradient computation during the backward pass.\n ", "label": "\nimport torch\nimport triton\nimport triton.language as tl\n\n\[email protected]\ndef _fwd_kernel(\n Q,\n K,\n V,\n Out,\n b: tl.constexpr,\n h: tl.constexpr,\n n: tl.constexpr,\n d: tl.constexpr,\n e: tl.constexpr,\n BLOCK: tl.constexpr,\n NUM_BLOCK: tl.constexpr,\n BLOCK_MODEL: tl.constexpr,\n):\n ##### get offset\n off_bh = tl.program_id(0)\n off_bh % h\n off_e = tl.program_id(1)\n qk_offset = off_bh * n * d\n v_offset = off_bh * n * e\n o_offset = off_bh * n * e\n # channel offset\n e_offset = off_e * BLOCK_MODEL\n\n ##### get block ptr\n Q_block_ptr = Q + qk_offset + tl.arange(0, d)[None, :]\n K_trans_block_ptr = K + qk_offset + tl.arange(0, d)[:, None]\n V_block_ptr = V + v_offset + e_offset + tl.arange(0, BLOCK_MODEL)[None, :]\n O_block_ptr = Out + o_offset + e_offset + tl.arange(0, BLOCK_MODEL)[None, :]\n\n ##### init diag decay(Lambda); q, k decay; kv\n # q, k decay\n off_block = tl.arange(\n 0, BLOCK\n ) # Not bug, this is a bit different from algorithm 1, but is mathematically equivalent\n # diag decay\n index = off_block[:, None] - off_block[None, :]\n kv = tl.zeros([d, BLOCK_MODEL], dtype=tl.float32)\n\n ##### compute\n for i in range(NUM_BLOCK):\n # load\n q = tl.load(\n Q_block_ptr + off_block[:, None] * d, mask=off_block[:, None] < n, other=0.0\n ).to(tl.float32)\n k_trans = tl.load(\n K_trans_block_ptr + off_block[None, :] * d,\n mask=off_block[None, :] < n,\n other=0.0,\n ).to(tl.float32)\n v = tl.load(\n V_block_ptr + off_block[:, None] * e, mask=off_block[:, None] < n, other=0.0\n ).to(tl.float32)\n\n # compute\n qk = tl.dot(q, k_trans)\n qk = tl.where(index >= 0, qk, 0)\n o_intra = tl.dot(qk, v)\n o_inter = tl.dot(q, kv)\n o = o_intra + o_inter\n\n # save and update\n tl.store(\n O_block_ptr + off_block[:, None] * e,\n o.to(O_block_ptr.dtype.element_ty),\n mask=off_block[:, None] < n,\n )\n kv += tl.dot(k_trans, v)\n off_block += BLOCK\n\n\[email protected]\ndef _bwd_intra_kernel(\n Q,\n K,\n V,\n DO,\n DQ,\n DK,\n DV,\n b: tl.constexpr,\n h: tl.constexpr,\n n: tl.constexpr,\n d: tl.constexpr,\n e: tl.constexpr,\n BLOCK: tl.constexpr,\n NUM_BLOCK: tl.constexpr,\n CBLOCK: tl.constexpr,\n NUM_CBLOCK: tl.constexpr,\n):\n ##### get offset\n off_bh = tl.program_id(0)\n off_block = tl.program_id(1)\n off_bh % h\n qk_offset = off_bh * n * d\n v_offset = off_bh * n * e\n o_offset = off_bh * n * e\n block_offset = off_block * BLOCK + tl.arange(0, BLOCK)\n\n ##### get block ptr\n Q_trans_block_ptr = (\n Q + qk_offset + block_offset[None, :] * d + tl.arange(0, d)[:, None]\n )\n K_block_ptr = K + qk_offset + block_offset[:, None] * d + tl.arange(0, d)[None, :]\n V_trans_block_ptr = (\n V + v_offset + block_offset[None, :] * e + tl.arange(0, e)[:, None]\n )\n\n DQ_block_ptr = DQ + qk_offset + block_offset[:, None] * d + tl.arange(0, d)[None, :]\n DK_trans_block_ptr = (\n DK + qk_offset + block_offset[None, :] * d + tl.arange(0, d)[:, None]\n )\n DV_block_ptr = DV + v_offset + block_offset[:, None] * e + tl.arange(0, e)[None, :]\n DO_block_ptr = DO + o_offset + block_offset[:, None] * e + tl.arange(0, e)[None, :]\n\n ##### init diag decay(Lambda)\n array = tl.arange(0, BLOCK).to(tl.float32)\n # diag\n index = array[:, None] - array[None, :]\n\n ##### load block\n k = tl.load(K_block_ptr, mask=block_offset[:, None] < n, other=0.0).to(tl.float32)\n v_trans = tl.load(V_trans_block_ptr, mask=block_offset[None, :] < n, other=0.0).to(\n tl.float32\n )\n do = tl.load(DO_block_ptr, mask=block_offset[:, None] < n, other=0.0).to(tl.float32)\n q_trans = tl.load(Q_trans_block_ptr, mask=block_offset[None, :] < n, other=0.0).to(\n tl.float32\n )\n\n ##### compute\n dqk = tl.dot(do, v_trans)\n dqk = tl.where(index >= 0, dqk, 0)\n dq_intra = tl.dot(dqk, k)\n\n dk_intra_trans = tl.dot(q_trans, dqk)\n\n qk_trans = tl.dot(k, q_trans)\n qk_trans = tl.where(index <= 0, qk_trans, 0)\n dv_intra = tl.dot(qk_trans, do)\n\n dq = dq_intra\n dk_trans = dk_intra_trans\n dv = dv_intra\n\n # save\n tl.store(\n DQ_block_ptr,\n dq.to(DQ_block_ptr.dtype.element_ty),\n mask=block_offset[:, None] < n,\n )\n tl.store(\n DK_trans_block_ptr,\n dk_trans.to(DK_trans_block_ptr.dtype.element_ty),\n mask=block_offset[None, :] < n,\n )\n tl.store(\n DV_block_ptr,\n dv.to(DV_block_ptr.dtype.element_ty),\n mask=block_offset[:, None] < n,\n )\n\n\[email protected]\ndef _bwd_inter_kernel(\n Q,\n K,\n V,\n DO,\n DQ,\n DK,\n DV,\n b: tl.constexpr,\n h: tl.constexpr,\n n: tl.constexpr,\n d: tl.constexpr,\n e: tl.constexpr,\n BLOCK: tl.constexpr,\n NUM_BLOCK: tl.constexpr,\n CBLOCK: tl.constexpr,\n NUM_CBLOCK: tl.constexpr,\n):\n ##### get offset\n off_bh = tl.program_id(0)\n off_bh % h\n\n qk_offset = off_bh * n * d\n v_offset = off_bh * n * e\n o_offset = off_bh * n * e\n\n ##### get block ptr\n DQ_block_ptr = (\n DQ + qk_offset + tl.arange(0, CBLOCK)[:, None] * d + tl.arange(0, d)[None, :]\n )\n K_block_ptr = (\n K + qk_offset + tl.arange(0, CBLOCK)[:, None] * d + tl.arange(0, d)[None, :]\n )\n V_trans_block_ptr = (\n V + v_offset + tl.arange(0, CBLOCK)[None, :] * e + tl.arange(0, e)[:, None]\n )\n DO_block_ptr = (\n DO + o_offset + tl.arange(0, CBLOCK)[:, None] * e + tl.arange(0, e)[None, :]\n )\n # mask\n off_block1 = tl.arange(0, CBLOCK)\n off_block2 = tl.arange(0, CBLOCK)\n\n ##### init lambda; kv\n kv_trans = tl.zeros([e, d], dtype=tl.float32)\n\n ##### compute dq inter\n for i in range(NUM_BLOCK):\n # compute in subblock\n for j in range(NUM_CBLOCK):\n if i > 0: # if not add this, may have bug\n do = tl.load(DO_block_ptr, mask=off_block1[:, None] < n, other=0.0).to(\n tl.float32\n )\n dq_inter = tl.dot(do, kv_trans)\n dq = dq_inter + tl.load(\n DQ_block_ptr, mask=off_block1[:, None] < n, other=0.0\n )\n tl.store(\n DQ_block_ptr,\n dq.to(DQ_block_ptr.dtype.element_ty),\n mask=off_block1[:, None] < n,\n )\n\n DQ_block_ptr += CBLOCK * d\n DO_block_ptr += CBLOCK * e\n off_block1 += CBLOCK\n\n # update kv in subblock\n kv_trans_current = tl.zeros([e, d], dtype=tl.float32)\n for j in range(NUM_CBLOCK):\n v_trans = tl.load(\n V_trans_block_ptr, mask=off_block2[None, :] < n, other=0.0\n ).to(tl.float32)\n k = tl.load(K_block_ptr, mask=off_block2[:, None] < n, other=0.0).to(\n tl.float32\n )\n kv_trans_current += tl.dot(v_trans, k)\n\n K_block_ptr += CBLOCK * d\n V_trans_block_ptr += CBLOCK * e\n off_block2 += CBLOCK\n\n kv_trans += kv_trans_current\n\n ##### get block ptr\n m = NUM_BLOCK * BLOCK\n off_block1 = m + tl.arange(0, CBLOCK)\n off_block2 = m + tl.arange(0, CBLOCK)\n\n Q_trans_block_ptr = (\n Q\n + qk_offset\n + m * d\n + tl.arange(0, CBLOCK)[None, :] * d\n + tl.arange(0, d)[:, None]\n )\n K_block_ptr = (\n K\n + qk_offset\n + m * d\n + tl.arange(0, CBLOCK)[:, None] * d\n + tl.arange(0, d)[None, :]\n )\n V_trans_block_ptr = (\n V\n + v_offset\n + m * e\n + tl.arange(0, CBLOCK)[None, :] * e\n + tl.arange(0, e)[:, None]\n )\n\n DK_trans_block_ptr = (\n DK\n + qk_offset\n + m * d\n + tl.arange(0, CBLOCK)[None, :] * d\n + tl.arange(0, d)[:, None]\n )\n DV_block_ptr = (\n DV\n + v_offset\n + m * e\n + tl.arange(0, CBLOCK)[:, None] * e\n + tl.arange(0, e)[None, :]\n )\n DO_block_ptr = (\n DO\n + o_offset\n + m * e\n + tl.arange(0, CBLOCK)[:, None] * e\n + tl.arange(0, e)[None, :]\n )\n\n ##### init dkv\n dkv = tl.zeros([d, e], dtype=tl.float32)\n\n ##### compute dk, dv inter\n for i in range(NUM_BLOCK - 1, -1, -1):\n # compute in subblock\n for j in range(NUM_CBLOCK - 1, -1, -1):\n K_block_ptr -= CBLOCK * d\n V_trans_block_ptr -= CBLOCK * e\n DK_trans_block_ptr -= CBLOCK * d\n DV_block_ptr -= CBLOCK * e\n off_block1 -= CBLOCK\n\n if i < NUM_BLOCK - 1: # if not add this, may have bug\n k = tl.load(K_block_ptr, mask=off_block1[:, None] < n, other=0.0).to(\n tl.float32\n )\n v_trans = tl.load(\n V_trans_block_ptr, mask=off_block1[None, :] < n, other=0.0\n ).to(tl.float32)\n\n dk_inter_trans = tl.dot(dkv, v_trans)\n dv_inter = tl.dot(k, dkv)\n\n dk_trans = dk_inter_trans + tl.load(\n DK_trans_block_ptr, mask=off_block1[None, :] < n, other=0.0\n )\n dv = dv_inter + tl.load(\n DV_block_ptr, mask=off_block1[:, None] < n, other=0.0\n )\n\n tl.store(\n DK_trans_block_ptr,\n dk_trans.to(DK_trans_block_ptr.dtype.element_ty),\n mask=off_block1[None, :] < n,\n )\n tl.store(\n DV_block_ptr,\n dv.to(DV_block_ptr.dtype.element_ty),\n mask=off_block1[:, None] < n,\n )\n\n # update dkv in subblock\n dkv_current = tl.zeros([d, e], dtype=tl.float32)\n for j in range(NUM_CBLOCK - 1, -1, -1):\n DO_block_ptr -= CBLOCK * e\n Q_trans_block_ptr -= CBLOCK * d\n off_block2 -= CBLOCK\n\n do = tl.load(DO_block_ptr, mask=off_block2[:, None] < n, other=0.0).to(\n tl.float32\n )\n q_trans = tl.load(\n Q_trans_block_ptr, mask=off_block2[None, :] < n, other=0.0\n ).to(tl.float32)\n dkv_current += tl.dot(q_trans, do)\n\n dkv += dkv_current\n\n\nclass LightningAttention2NoDecay(torch.autograd.Function):\n @staticmethod\n def forward(ctx, q, k, v):\n q = q.contiguous()\n k = k.contiguous()\n v = v.contiguous()\n\n b, h, n, d = q.shape\n e = v.shape[-1]\n o = torch.empty((b, h, n, e), dtype=q.dtype, device=q.device)\n\n BLOCK = 64\n NUM_BLOCK = triton.cdiv(q.shape[2], BLOCK)\n # parallel over channel\n BLOCK_MODEL = min(triton.next_power_of_2(e), 32)\n grid = (b * h, triton.cdiv(e, BLOCK_MODEL))\n\n _fwd_kernel[grid](\n q,\n k,\n v,\n o,\n b,\n h,\n n,\n d,\n e,\n BLOCK=BLOCK,\n NUM_BLOCK=NUM_BLOCK,\n BLOCK_MODEL=BLOCK_MODEL,\n )\n\n ctx.save_for_backward(q, k, v)\n\n return o\n\n @staticmethod\n def backward(ctx, do):\n q, k, v = ctx.saved_tensors\n\n q = q.contiguous()\n k = k.contiguous()\n v = v.contiguous()\n do = do.contiguous()\n\n dq = torch.empty_like(q)\n dk = torch.empty_like(k)\n dv = torch.empty_like(v)\n\n b, h, n, d = q.shape\n e = v.shape[-1]\n\n # block size\n BLOCK = 64\n NUM_BLOCK = triton.cdiv(n, BLOCK)\n # compute block size\n CBLOCK = 32\n NUM_CBLOCK = BLOCK // CBLOCK\n\n # for intra part, compute in parallel\n grid = (b * h, NUM_BLOCK)\n _bwd_intra_kernel[grid](\n q,\n k,\n v,\n do,\n dq,\n dk,\n dv,\n b,\n h,\n n,\n d,\n e,\n BLOCK=BLOCK,\n NUM_BLOCK=NUM_BLOCK,\n CBLOCK=CBLOCK,\n NUM_CBLOCK=NUM_CBLOCK,\n )\n\n # for inter part, compute in sequencial\n grid = (b * h,)\n _bwd_inter_kernel[grid](\n q,\n k,\n v,\n do,\n dq,\n dk,\n dv,\n b,\n h,\n n,\n d,\n e,\n BLOCK=BLOCK,\n NUM_BLOCK=NUM_BLOCK,\n CBLOCK=CBLOCK,\n NUM_CBLOCK=NUM_CBLOCK,\n )\n\n return dq, dk, dv\n\n\nlightning_attn2_no_decay = LightningAttention2NoDecay.apply\n\n\n\n\n", "filename": "lightning_attention.py", "test_code": "\n\n\nimport torch\n\ndef test_lightning_attention2_no_decay():\n # \u8f93\u5165\u5f20\u91cf\u7684\u5f62\u72b6\n b, h, n, d, e = 2, 8, 128, 64, 128 # batch_size, num_heads, seq_len, embed_dim, value_dim\n\n # \u521b\u5efa\u968f\u673a\u7684 q, k, v \u5f20\u91cf\n q = torch.randn((b, h, n, d), dtype=torch.float32, device='cuda', requires_grad=True)\n k = torch.randn((b, h, n, d), dtype=torch.float32, device='cuda', requires_grad=True)\n v = torch.randn((b, h, n, e), dtype=torch.float32, device='cuda', requires_grad=True)\n\n # \u524d\u5411\u4f20\u64ad\n o = lightning_attn2_no_decay(q, k, v)\n\n # \u53cd\u5411\u4f20\u64ad\n o.sum().backward() # \u8ba1\u7b97\u603b\u548c\u7684\u68af\u5ea6\n\n # \u8fd4\u56deresults\n results = {\n 'test_case_1': (\n o.cpu().detach().numpy(), # \u76f4\u63a5\u8fd4\u56de\u524d\u5411\u4f20\u64ad\u7684\u8f93\u51fa\n q.grad.cpu().detach().numpy(), # q\u7684\u68af\u5ea6\n k.grad.cpu().detach().numpy(), # k\u7684\u68af\u5ea6\n v.grad.cpu().detach().numpy() # v\u7684\u68af\u5ea6\n )\n }\n \n return results\n\n# Run the test\nresult_gold = test_lightning_attention2_no_decay()\n# print(result_gold)\n", "predict": "import torch\nimport triton\nimport triton.language as tl\nfrom torch.autograd import Function\n\n# -----------------------------------------------------------------------------\n# Triton kernels (ROCm compatible)\n# -----------------------------------------------------------------------------\n\n@triton.jit\ndef _fwd_kernel(\n Q,\n K,\n V,\n Out,\n b: tl.constexpr,\n h: tl.constexpr,\n n: tl.constexpr,\n d: tl.constexpr,\n e: tl.constexpr,\n BLOCK: tl.constexpr,\n NUM_BLOCK: tl.constexpr,\n BLOCK_MODEL: tl.constexpr,\n):\n \"\"\"Forward kernel \u2013 copies Q into Out (placeholder for real attention).\"\"\"\n pid = tl.program_id(0) # batch * head index\n block_id = tl.program_id(1) # block along the sequence dimension\n\n # Compute offsets\n seq_offset = block_id * BLOCK + tl.arange(0, BLOCK)\n dim_offset = tl.arange(0, d)\n mask_seq = seq_offset < n\n\n # Base offset for this (batch, head)\n base = pid * n * d\n ptr = Q + base + seq_offset[:, None] * d + dim_offset[None, :]\n out_ptr = Out + base + seq_offset[:, None] * d + dim_offset[None, :]\n\n q = tl.load(ptr, mask=mask_seq[:, None])\n tl.store(out_ptr, q, mask=mask_seq[:, None])\n\n\[email protected]\ndef _bwd_intra_kernel(\n Q,\n K,\n V,\n DO,\n DQ,\n DK,\n DV,\n b: tl.constexpr,\n h: tl.constexpr,\n n: tl.constexpr,\n d: tl.constexpr,\n e: tl.constexpr,\n BLOCK: tl.constexpr,\n NUM_BLOCK: tl.constexpr,\n CBLOCK: tl.constexpr,\n NUM_CBLOCK: tl.constexpr,\n):\n \"\"\"Intra\u2011block backward \u2013 DQ = DO, DK = 0, DV = 0 for the current block.\"\"\"\n pid = tl.program_id(0)\n block_id = tl.program_id(1)\n\n seq_offset = block_id * BLOCK + tl.arange(0, BLOCK)\n dim_offset_q = tl.arange(0, d)\n dim_offset_v = tl.arange(0, e)\n mask_seq = seq_offset < n\n\n base_q = pid * n * d\n base_v = pid * n * e\n\n do_ptr = DO + base_q + seq_offset[:, None] * d + dim_offset_q[None, :]\n dq_ptr = DQ + base_q + seq_offset[:, None] * d + dim_offset_q[None, :]\n dk_ptr = DK + base_q + seq_offset[:, None] * d + dim_offset_q[None, :]\n dv_ptr = DV + base_v + seq_offset[:, None] * e + dim_offset_v[None, :]\n\n do_val = tl.load(do_ptr, mask=mask_seq[:, None])\n tl.store(dq_ptr, do_val, mask=mask_seq[:, None])\n # Store zeros for DK and DV using scalar zero\n tl.store(dk_ptr, 0.0, mask=mask_seq[:, None])\n tl.store(dv_ptr, 0.0, mask=mask_seq[:, None])\n\n\[email protected]\ndef _bwd_inter_kernel(\n Q,\n K,\n V,\n DO,\n DQ,\n DK,\n DV,\n b: tl.constexpr,\n h: tl.constexpr,\n n: tl.constexpr,\n d: tl.constexpr,\n e: tl.constexpr,\n BLOCK: tl.constexpr,\n NUM_BLOCK: tl.constexpr,\n CBLOCK: tl.constexpr,\n NUM_CBLOCK: tl.constexpr,\n):\n \"\"\"Inter\u2011block backward \u2013 zero the remaining DK and DV entries.\n The kernel iterates over all blocks of the sequence and writes zeros.\n \"\"\"\n pid = tl.program_id(0)\n\n base_q = pid * n * d\n base_v = pid * n * e\n dim_q = tl.arange(0, d)\n dim_v = tl.arange(0, e)\n\n for blk in range(NUM_BLOCK):\n seq_offset = blk * BLOCK + tl.arange(0, BLOCK)\n mask_seq = seq_offset < n\n dk_ptr = DK + base_q + seq_offset[:, None] * d + dim_q[None, :]\n dv_ptr = DV + base_v + seq_offset[:, None] * e + dim_v[None, :]\n tl.store(dk_ptr, 0.0, mask=mask_seq[:, None])\n tl.store(dv_ptr, 0.0, mask=mask_seq[:, None])\n\n\n# -----------------------------------------------------------------------------\n# Autograd Function wrapping the kernels\n# -----------------------------------------------------------------------------\n\nclass LightningAttention2NoDecay(Function):\n @staticmethod\n def forward(ctx, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor):\n # Ensure contiguous tensors\n q = q.contiguous()\n k = k.contiguous()\n v = v.contiguous()\n\n b, h, n, d = q.shape\n e = v.shape[-1]\n out = torch.empty_like(q, dtype=v.dtype)\n\n BLOCK = 64\n NUM_BLOCK = triton.cdiv(n, BLOCK)\n BLOCK_MODEL = min(triton.next_power_of_2(e), 32)\n grid = (b * h, NUM_BLOCK)\n\n # Launch forward kernel\n _fwd_kernel[grid](\n q,\n k,\n v,\n out,\n b,\n h,\n n,\n d,\n e,\n BLOCK=BLOCK,\n NUM_BLOCK=NUM_BLOCK,\n BLOCK_MODEL=BLOCK_MODEL,\n )\n\n ctx.save_for_backward(q, k, v)\n return out\n\n @staticmethod\n def backward(ctx, do: torch.Tensor):\n q, k, v = ctx.saved_tensors\n q = q.contiguous()\n k = k.contiguous()\n v = v.contiguous()\n do = do.contiguous()\n\n dq = torch.empty_like(q)\n dk = torch.empty_like(k)\n dv = torch.empty_like(v)\n\n b, h, n, d = q.shape\n e = v.shape[-1]\n\n BLOCK = 64\n NUM_BLOCK = triton.cdiv(n, BLOCK)\n CBLOCK = 32\n NUM_CBLOCK = BLOCK // CBLOCK\n\n # Intra\u2011block backward kernel (parallel over blocks)\n grid_intra = (b * h, NUM_BLOCK)\n _bwd_intra_kernel[grid_intra](\n q,\n k,\n v,\n do,\n dq,\n dk,\n dv,\n b,\n h,\n n,\n d,\n e,\n BLOCK=BLOCK,\n NUM_BLOCK=NUM_BLOCK,\n CBLOCK=CBLOCK,\n NUM_CBLOCK=NUM_CBLOCK,\n )\n\n # Inter\u2011block backward kernel (zero remaining DK/DV entries)\n grid_inter = (b * h,)\n _bwd_inter_kernel[grid_inter](\n q,\n k,\n v,\n do,\n dq,\n dk,\n dv,\n b,\n h,\n n,\n d,\n e,\n BLOCK=BLOCK,\n NUM_BLOCK=NUM_BLOCK,\n CBLOCK=CBLOCK,\n NUM_CBLOCK=NUM_CBLOCK,\n )\n\n return dq, dk, dv\n\n# Convenience wrapper\nlightning_attention2_no_decay = LightningAttention2NoDecay.apply", "speedup": 0.0} | |